SECRET WAR to Control AGI | AI Doomer $755M War Chest | Vitalik Buterin, X-risk & Techno Optimism

Captions
I don't even know where to begin, but I guess let's start here: OpenAI researchers, including allies of Ilya Sutskever, fired for leaking information out of OpenAI. If you recall that whole November fiasco with the firing of Sam Altman, there were the Q* leaks, which have been confirmed to be real, by the way. We still don't know what Q* is, but the leaks were real. Well, apparently some of the researchers behind some of those leaks, we don't know specifically which ones, have been found and fired: Leopold Aschenbrenner and Pavel Izmailov. We're not sure what they leaked, but it seems like they were working on AI safety; Pavel was also working on reasoning as well as AI safety. Do you think there's a chance these people have links to some shadowy organization that is really against AI? The article posted their pictures here, Leopold and Pavel, and it seems they have ties to the effective altruism movement. All right, but to really understand what's happening here, we have to talk about effective altruism, or EA as it's sometimes referred to. What is effective altruism? A couple of quick disclaimers first. I don't know effective altruism as well as I should, so I'm relying on information I found on the internet; some of it may be inaccurate, and if I'm off about something I'll try to post corrections in the comments or do a follow-up video. At the same time, I think it's very difficult to understand exactly what this thing is, because while it maybe started as one thing, maybe even an altruistic thing, what it morphed into is, I think, very different. As far as I can tell, they are very secretive about what they do and what their goals are. It's really difficult to figure out what they actually want: not their stated mission of quote-unquote "help humanity," but their actual mission, the thing they're actually trying to accomplish. So it started in 2011 with Peter Singer, Toby Ord (remember Toby Ord), and William MacAskill, and
their stated mission was, sort of, "using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis." So basically they wanted to think about how to help humanity in the best way possible, take the long view, and go about it in a reasonable, scientific way. That's as best as I understand it, and explained like that, I would say: yes, this is a good group, and I kind of share those beliefs as well. We should try to help everyone, focus on the long term, and think about how to do so with evidence and reason. Again, the stated mission is not the problem here. In fact, here's William MacAskill, a moral philosopher at Oxford. He has a book, What We Owe the Future, and here's Elon Musk saying "worth reading, this is a close match to my philosophy." So Elon Musk, a number of years ago, said, hey, this sounds like a good idea. Which again, on the surface, does sound like a good idea: helping humanity, going about it in an intelligent way, thinking long-term versus short-term. Here's Stephen Mark Ryan saying it should be a good read: "Will did a super fascinating podcast with Tim Ferriss close to a decade ago, really got me thinking." I just realized I remember when Tim Ferriss published his first-ever podcast. I think he was very nervous about doing a podcast, so he really hit the wine very hard during that episode, and it kind of went off the rails towards the end. But yeah, it was close to a decade ago. Actually, now it has been a decade, and I feel very, very old. The point I'm trying to make is that there's what we say we want, and then there's what we actually do. I'm sure we all have a spam box full of emails promising us wonderful things that, at face value, maybe we do want. They promise fortune, fame, and adulation, so the headline is good, but the final result is you having to dispute various credit card charges because you've been defrauded. Effective altruism started with a good headline: let's help the world as much as possible. How did it end? Well, it started with Sam Bankman-Fried, the founder of FTX. I haven't followed that too closely, but it sounds like he defrauded various crypto investors, and it sounds like they're missing billions of dollars. This is an article from Wired: "Effective altruism is pushing a dangerous brand of AI safety. This philosophy, supported by tech figures like Sam Bankman-Fried, fuels the AI research agenda, creating a harmful system in the name of saving humanity." So Sam Bankman-Fried is in jail, or I guess federal prison technically, and he's not having a good time there. His lawyers are arguing that he should have a reduced sentence because he's uniquely vulnerable to the dangers of prison: being autistic, he has a hard time picking up on certain social cues that are very important to survival in a place like that. Which, by the way, I'm sure is 100% true; I do not doubt that claim. However, the lawyers are asking for his sentence to be reduced to 5 years, and I really doubt that's going to work. So this was the OpenAI board in November 2023 when that whole fiasco happened. We have Adam D'Angelo, still on the board as of right now, founder and currently running Quora. If you've been hearing a little more about Poe, their chatbot, I believe it's running Anthropic's technology now; I think they've used both OpenAI and Anthropic's Claude to run Poe, if I recall correctly. But he's still there. Then we have Ilya Sutskever, who has been strangely silent since the whole thing; we don't really know where he is. Then Tasha McCauley and Helen Toner. We think Helen Toner is the one behind a lot of this: there was a paper she wrote criticizing how OpenAI handled some of its releases, which might have created a clash with Sam Altman, and that's the thing that led to this whole episode. Helen Toner is part of the effective altruism community. During the whole OpenAI board crisis, they refused to
talk about what was happening, even though they were getting calls from the Attorney General; in fact, I think the same Attorney General that put Sam Bankman-Fried away called them, and they had a two-hour-long conversation. Again, this is based on some of the leaks we were seeing from OpenAI. They refused to expand on what was happening; they were still very secretive, they didn't want any information to get out, and eventually that board was kicked out and Sam Altman came back to run OpenAI. To this day, I don't think they ever explained what they were doing or for what reason. They put out a statement that had some words strung together but no actual data, no explanation; it's just like, "we regret the occurrence of the blah blah blah," but it didn't say anything. I think this is the statement they put out: OpenAI's mission is to ensure artificial general intelligence benefits everyone, and the board has to prioritize this mission; "accountability is important; it's even more important for AGI... as we told the investigators, deception, manipulation, and resistance to thorough oversight should be unacceptable." And yet they themselves don't seem to be very open about what their concerns were or what actually happened. For all their talk of accountability, they're not really accounting for their own actions. Based on what I've seen, I just haven't found anywhere where they talk about their motivations. Here's Toby Ord and one of his books; he's, again, one of the co-founders of this movement, effective altruism. This is posted by David Z. Morris, who says Ord is an unabashed advocate for unified global government. Who decides what's an extinction risk, and who the hell decides exactly how much is "unnecessary" extinction risk? And this is from Toby Ord's book: "Another promising avenue for incremental change is to explicitly prohibit and punish the deliberate or reckless imposition of unnecessary extinction risk. International law is the natural place for this, as those who impose such risk may well be national governments or heads of state who could be effectively immune to mere national law." So it seems like what these people want to create is a unified global government that is able to punish democratically elected heads of state if it perceives what they're doing to be an extinction risk, whatever that means. How do you define what's an extinction risk? Who gets to decide? This seems to me like it would give them absolute power to jail anyone: heads of state, people running a country, hopefully elected democratically, removed from their post or put in jail with no explanation other than "you pose an extinction risk." Going back to Aschenbrenner and Izmailov: Aschenbrenner graduated from Columbia University and previously worked at the Future Fund, a fund started by former FTX chief Sam Bankman-Fried. Again, that's the guy in jail whose team of lawyers is actively trying to reduce his sentence. That fund was aimed at financing projects to improve humanity's long-term prospects. Then, a year ago, Aschenbrenner joined OpenAI. Several of the board members who fired Altman also had ties to effective altruism: Tasha McCauley is a board member of Effective Ventures, the parent organization of the Centre for Effective Altruism, and Helen Toner previously worked at the effective-altruism-focused Open Philanthropy project. Of course, both left the board when Altman returned as CEO. Now, this is Vitalik Buterin, the guy behind Ethereum. Ethereum has, for most of its existence, been the number two biggest and most successful cryptocurrency after Bitcoin. I don't track that stuff too closely nowadays, but I figured I'd check, and yes, it's still number two. And this is Max Tegmark of the Future of Life Institute, so another
person seemingly associated with EA, since the Future of Life Institute and EA seem linked. So, in May 2021, Vitalik Buterin burns 80% of his SHIB holdings and uses the remaining roughly 20% for long-term charitable causes. Shiba Inu is one of those crazy doggy cryptocurrencies; it doesn't really matter. The point is he gives a lot of money to the Future of Life Institute, to the tune of $755 million. Not quite a billion, but still quite a bit. The Future of Life Institute uses FTX, Sam Bankman-Fried's company that defrauded investors out of billions, to liquidate the SHIB tokens, selling them, converting them into dollars I assume, and they use that money to create the Vitalik Buterin Fellowship in AI Existential Safety. Everyone pats themselves on the back: the Future of Life Institute, Buterin, the Shiba Inu community, Sam Bankman-Fried, Alameda Research. Here's Max Tegmark, here's Vitalik. Then in November 2022, whoops: the collapse of Sam Bankman-Fried, FTX, and Alameda Research due to fraud allegations. Boy, they got so lucky that they cashed out; it sounds like they were able to get their money out in time. The Future of Life Institute then posts a listing in the EU transparency register showing the Musk Foundation as the top contributor, at about 3 million it looks like. But where's the nearly a billion dollars from Shiba Inu? Well, the listing lacks that amount, the amount they liquidated from the 2021 Shiba Inu donation, "since the audit is still in progress," and the yearly budget presents the Musk Foundation donation as the prominent one. So the roughly 3.5 million from Musk is listed as the top contribution, not the close to a billion dollars from Shiba Inu. Then of course we have that "Pause Giant AI Experiments" open letter, and everyone points to Musk as the person funding the foundation. Then there's the 2023 update, listing the donation at around 600,000 in the minor cryptocurrency; I guess it went down in value since the donation. In 2023 the Future of Life Institute participates in the UK AI Safety Summit, Tegmark addresses the US Congress, and the EU AI Act, which they pushed for, passes, allowing for the regulation of general-purpose AI systems. Here's an interview with one of the Future of Life Institute's co-founders talking about how they view protecting the world from AI: basically by making the hardware illegal and subjecting software, the code that people write, to pervasive surveillance on a global scale. Take a listen: "I do think that governments can make things illegal. You can make hardware illegal; you can also say that producing graphics cards above a certain capability level is now illegal, and suddenly you have much, much more runway as a civilization." "Do you get into a territory of having to put surveillance on what code is running in a data center?" "Yeah, I mean, regulating software is much harder than hardware. If you let Moore's law continue, then the surveillance has to be more and more pervasive. So my focus for the foreseeable future will be on regulatory interventions; I'm trying to educate lawmakers, and helping, and perhaps hiring lobbyists, to try to make the world safer." Now the Future of Life Institute has a new grant program for global governance mechanisms and institutions. He wants to ban the creation of AGIs and put various surveillance mechanisms in place, and this year the Future of Life Institute tells Politico that its efforts support "common sense regulations." But what they're talking about is banning GPUs, making these Nvidia cards above a certain capability illegal, and surveilling the software people write. Add to that the fact that Ord, one of the co-founders of effective altruism, is talking about some sort of global government that sits above heads of state, above governments, able to jail people for creating "existential risks," which again is very vague. They don't really define what an existential risk is; they don't really talk through why they think AI
might kill everyone. It seems they're just pushing for regulation, for political power, global political power. My spam box is full of very attractive-sounding headlines, but in reality what the senders want is to rip me off and take my money. Same with Sam Bankman-Fried and the FTX thing: they said they wanted to help everyone get wealthy and help the world, but ended up ripping everybody off and losing billions of dollars of investor funds. Now these people are saying they want to save us from certain doom, certain extinction from AGI. Effective altruism wants to help humanity, right? That's the headline. What is the actual thing that's going to happen? There's this funny commercial that FTX made for the Super Bowl. It was funny then; it's hilarious now, because it featured Larry David as a skeptical character who dismissed major innovations throughout history, like the wheel, the fork, the toilet, and now he's dismissing the cryptocurrency exchange FTX. The whole point of the commercial was: don't be like Larry, invest with FTX. What's funny is that we should be like Larry. And I don't mean the real person Larry David, who himself sounds like he lost a whole bunch of money on crypto, because they paid his salary in crypto, can you believe that? I'm talking about being like Larry, this mythical person who can smell the BS when he sees it. Your email spam box full of women who want to meet you is a good headline, but it's fraud; they just want your money. Companies like FTX that say they want to make you rich: it's fraud; they just want to take your money. The people saying they want to protect you from extinction by this scary software? Say it with me: it's fraud. Do they want to install a global governance mechanism and ban and jail anyone who disagrees with them? Probably, because they believe they can install themselves at the very top and become the absolute kings of the world. I hate to break it to you, but these aren't the good guys. Now, I have to say, in regards to Vitalik Buterin, I was kind of surprised that he was caught up in this. He didn't strike me as one of those people, and maybe this is me being naive, maybe a little too trusting, but to me the jury's still out on this guy. He posted this image, which I thought was excellent. I try to do my best not to go all in on any specific view; I like to stay a little more neutral. I have my biases, my opinions, my preferences, but I think now more than ever it's important to try to understand the different opinions, the different sides. You can have your preference, but at least understand where the other side is coming from. One view is the anti-technology view: the idea that safety is behind and dystopia ahead. Quite a number of people share this view; certainly the people we've talked about today seem to see it this way, or at least they say they do. Dystopia ahead: AI will kill everyone, AI will turn us all into paperclips. Some people say it won't just destroy humanity and Earth, but will take over our entire universe and turn it into paperclips, or whatever other scenario they envision. And safety is behind us: we have to stop technological progress, decelerate, learn to live with less, less food, less comfort, less air conditioning, and move backwards in time. A lot of these beliefs also overlap with the idea of depopulation. This is one thing Elon Musk rails against; he says, no, we need more people, more kids, we need the next generation, and he's fighting against the forces saying we need fewer people, we need to reduce Earth's population. And by the way, if you're not following some of this, these are real conversations that some people are having, including people who wield a lot of political power, a lot of influence, a lot of capital. But that's the anti-technology view, right?
Outlaw GPUs and impose worldwide surveillance on software, because if we keep going down this path there's doom ahead. Then there's the accelerationist view: the dangers are behind and utopia is ahead. Right now we're seeing a lot of progress with AI, for example in drug discovery. There's more and more overlap between genomics and AI, so potentially we could cure some hard-to-cure diseases, have people live longer, have more targeted drugs that help people heal without the side effects. We could potentially be seeing our first commercially viable fusion power plant, which would make energy very cheap. People are talking about colonizing other planets, removing the risk of being a single-planet species that one unlucky meteor could wipe out. So these people view advancing technology as the right way forward, and the dangers as slowing down and letting in crippling regulations and world governments ruled by people we may not agree with: authoritarian governments, worldwide surveillance, and so on. And then we have the third view, which is what Vitalik Buterin endorses: "that's my view," that there are dangers behind and multiple paths forward ahead, some good, some bad. And this, at least, I can kind of agree with: the path forward has wonderful, amazing promise, and potentially some dangers. But I'm going to be 100% honest and come out and say it: the people with that first, anti-technology viewpoint scare me the most. The people who want to install a global authoritarian surveillance regime bigger than governments, in order to protect us from something vague they can't even fully describe, that scares me. Because even if they are sincere, and they're good people, super duper nice, wanting the best for everyone, the next generation that takes over may not be, and eventually we're going to run into somebody who will use that power for something bad, and at that point it will be too late to do anything about it. But back to Vitalik: "My Techno-Optimism," this blog post he wrote, is big. It's huge, pages and pages of notes and bullet points and various charts and graphs; its table of contents alone is about a page long. His post also mentions Marc Andreessen as one of the faces of techno-optimism, the people who believe that technology, that AI, will help the world. Andreessen is, by the way, one of the main guys behind a16z, Andreessen Horowitz. They wrote the Techno-Optimist Manifesto on the a16z website, and they believe that advancing technology is one of the most virtuous things we can do. They believe in ambition, aggression, persistence, relentlessness, strength. They believe in merit and achievement. They believe in pride, confidence, and self-respect, when earned. They believe in free thought, free speech, and free inquiry. They believe in the actual scientific method and the Enlightenment values of free discourse and challenging the authority of experts. They believe, as Richard Feynman said, that "science is the belief in the ignorance of experts," and "I would rather have questions that can't be answered than answers that can't be questioned." And they have enemies, and I quote: "We have enemies. Our enemies are not bad people, but rather bad ideas." Those enemies go by different names: existential risk, degrowth. Their enemy is stagnation, corruption, regulatory capture. Their enemy is speech control and thought control. "Our enemy is deceleration, degrowth, depopulation, the nihilistic wish, so trendy among our elites, for fewer people, less energy, and more suffering and death." So I might go back and read the Vitalik post to try to understand where he's coming from, but a quick AI summary I did makes it seem that he is indeed somewhere in between. He believes there are specific dangers ahead and specific very good paths ahead, and of course this bear
behind us means he believes that technology should advance, that AI should grow with humans, that we should be integrated with AI. He has some pretty nuanced takes on what EA is and what e/acc is. E/acc is, of course, effective accelerationism, and in that Andreessen Horowitz list of "patron saints of techno-optimism," the first person, and I think also the second, is one of the leaders of that effective accelerationism, or e/acc, movement. So to me, Vitalik is trying to be very nuanced in a very polarized world. I think he's somebody who thinks pretty deeply about this stuff, and I just can't see him as an anti-technology person; he believes that technology is amazing and that there are very high costs to delaying it. There's this interesting chart he posted, laid out in quadrants: on the right you have "AGI coming soon," on the left "not very soon," and toward the bottom high p(doom), meaning all future value is likely to be destroyed by misaligned AGI if you're an AI doomer, while toward the top it's unlikely that AGI will destroy everybody. He's careful to say this is not serious, just guesswork about where everybody stands, but you can see Sam Altman placed as "AGI is coming soonish and it's unlikely to destroy everybody." You have the founder of Google up there: highly unlikely that it destroys us. They're very positive about it. At the very bottom you have Eliezer Yudkowsky, probably the most well-known AI doomer. Demis Hassabis, who runs Google DeepMind, is placed as, let's say, cautious: there could be problems, we have to be careful. Yann LeCun is very positive, Andrew Ng very positive that it's not going to destroy us. Interestingly, Gary Marcus is placed high up here, though he tends to think AI is not going to be very effective. Again, a lot of this is just guesswork; it's not serious in any way. But it looks like Vitalik placed himself in the category that AGI is not coming anytime soon and is unlikely to destroy us, so he's not very concerned, though he thinks there's a chance, so maybe a little bit concerned. He places his p(doom), the probability of something horrible happening, existential risk, at 0.1, saying you don't have to buy the story, but in his opinion it's worth worrying about. He says his philosophy is d/acc, and in a podcast on Bankless he talks about what d/acc stands for: the "d" is defensive, as in accelerating, but defensively, carefully; it also stands for decentralization, as in getting away from one potentially authoritarian government or some central system pulling the strings of everything and everybody. So I'll post a survey down below somewhere that will let you vote on where you stand on this whole thing. Do you think we should accelerate technology, accelerate AI, as much as we can, because there's more danger in slowing down than in accelerating? Are you more in line with a world government controlling everything, surveilling everything, given absolute power because only they can protect us from death by AI? I'm sure there are some people who believe that. Or do you think that maybe we do need to accelerate, but defensively, cautiously? Maybe you're somewhere in between. Let me know; I'm curious where people fall on that spectrum, because I think these questions are going to become more and more relevant. As you can see, there are well-funded organizations trying to push through these regulations. They've succeeded in the EU, and they're trying here in the US as well. They want to control all forms of software that could use neural nets; they want to control search engines or anything that predicts the demand, supply, price, cost, or transportation needs of products or services. The proposed powers are very open-ended, so not a rule-making process
or a due process: just give them all the power and they will protect you. Over and over, the legislation has this one-way ratchet clause: the administrator is free to make rules stricter without any evidence, but has to prove a negative to relax any rule. Easy to gain more power, hard to give any of it up. No open-source software: if it doesn't get a government okay, it cannot be continued. If you buy, sell, gift, receive, trade, or transport even one covered chip, like an Nvidia card covered under this act, you have committed a crime. This Frontier Artificial Intelligence System Administration can straight-up compel testimony and conduct raids for any investigation or proceeding, including speculative, proactive investigations. There's a massive criminal liability section, not just for the people doing the math and building the AI, but also for any officials who don't do their jobs. And here's the kicker: emergency powers. The administrator of this organization they plan to create can, on his own authority, shut down the frontier AI industry for six months, or as I'm reading it here, 60 days unless confirmed by the president or Congress, who can then extend it to one year. They can take full possession and control of specified locations or equipment related to AI, and the administrator can conscript troops, so they can basically raise an army to fight the nerds putting together AI software. Also, of course, all other agencies have to consult this agency if they're doing any AI enforcement. They amend antitrust law to give the administrator a near-veto on AI mergers, and they can use whatever funding they can get their hands on, including fines imposed and donations. So wherever you are in the world, I think you should figure out where you stand on these policies, on AI safety versus tech optimism. Who are the good guys, who are the bad guys? You should decide, otherwise the decision will be made for you. With that said, my name is Wes Roth, and thank you for watching.
Info
Channel: Wes Roth
Views: 45,633
Id: j4sYtMpetoc
Length: 29min 38sec (1778 seconds)
Published: Sun Apr 14 2024