Why Eliezer Yudkowsky is Wrong with Robin Hanson

Captions
...and one of the moves that AI people often make to spin scenarios is just to assume that AIs have none of those problems. AIs do not need to coordinate, they do not have conflicts between them, they don't have internal conflicts, they don't have any issues in how to organize and how to keep the peace between them. None of that's a problem for AIs, by assumption. They're just this other thing that has no such problems, and then of course that leads to scenarios like: then they kill us all.

Welcome to Bankless, where we explore the frontier of internet money and internet finance, and also AI. This is how to get started, how to get better, how to front-run the opportunity. This is Ryan Sean Adams. I'm here with David Hoffman, and we're here to help you become more bankless.

Guys, we promised another AI episode after our episode with Eliezer. Well, here it is. Here's the sequel. The last episode, with Eliezer Yudkowsky, we titled correctly: We're All Gonna Die, because that's basically what he said. I left that episode with, um, a lot of existential dread. Yeah, existential dread. It was not good news in that episode, and I was having difficulty processing it. But David and I talked, and we knew we had to have some follow-up episodes to tell the full story, Bankless style, and go on the journey of AI, its intersection with our lives, with the world, and with crypto. So here it is. This is the answer to that. This is Robin Hanson on the podcast today.

Let me go over a few takeaways. Number one: we talk about why Robin thinks Eliezer is wrong, why we're not all gonna die from artificial intelligence, but we might become their pets. Number two: why we're more likely to have a civil war with AI than to be eaten by one single artificial intelligence. Number three: why Robin is more worried about regulation of AI than about actual AI; very interesting. Number four: why alien civilizations spread like cancer; this is also related to AI and super interesting. Number five: finally, we get to what in the world Robin Hanson thinks about crypto.

David, why was this episode significant for you? Robin Hanson is such a great thinker. He's absolutely a polymath, and like Eliezer he progresses in his thoughts in a very linear, logical fashion, so he's easy to follow along with. The first half of this episode, maybe the first 45 or 50 minutes, is all about the AI alignment debate, Eliezer versus Hanson, which is a debate that has actually been going on for many, many years now. Over a decade, yeah. You're right, this is not the first time that Eliezer has heard of Robin Hanson, or that Robin Hanson has debated Eliezer. This is an ongoing saga, and so this is just course material for Robin Hanson. So we really focus on this AI alignment problem, how these thinkers think AI will develop and progress here on planet Earth, and how, in friendly or unfriendly ways, it will ultimately collide with humanity.

That's the first half of this episode. The second half is when this gets really, really interesting. If you just listened to the first half of this episode, you would think, oh, this is the other half of the conversation in the AI debate, which it is. But the second half connects this to so many more rabbit holes and so many more topics of conversation that are deeply ingrained in Bankless content themes: the themes of competition versus coercion, the themes of exploring frontiers, the theme of Moloch and the prisoner's dilemma, and how all these things
coordinate across species. We connect AI alignment to Robin Hanson's famous idea that he calls grabby aliens. If you haven't heard about grabby aliens, you're in for a treat. So this goes from a simple counterargument in a debate we've had to a multifaceted exploration, one that is just a cursory pass over very many deep subjects that I hope to explore further on Bankless.

Yeah, and honestly, David, I'm dying to record the debrief with you, because I want to get your take on this episode. You can see how giddy I was in the second half of the episode. I know, and I want to contrast it with our Eliezer episode: how these two thinkers think, and who you think has the stronger case. The debrief episode is the episode David and I record after the episode, where we just talk about what just happened and give our raw, unfiltered thoughts. We're about to record that now. If you are a Bankless citizen, then you have access to that right now. If you'd like to become a citizen, click the link in the show notes and you'll get access to our premium RSS feed, where you'll have access to that. Also, this episode will become a collectible next Monday. I'm collecting this episode so hard. Me too; I've got that Eliezer episode in my collection, and I'm also collecting this one. We release episode collections for our key episode of the week every Monday. The mint time is 3 p.m. Eastern; whatever time zone you're in, you'll have to convert that.

That's it. We're going to get right to the episode with Robin Hanson, but before we do, we want to thank the sponsors that made this possible, including our favorite crypto exchange, Kraken, our recommended exchange for 2023. Go set up an account. Kraken has been a leader in the crypto industry for the last 12 years, dedicated to accelerating the global adoption of crypto. Kraken puts an emphasis on security, transparency, and client support, which is why over 9 million clients have come to love Kraken's products. Whether you're a beginner or a pro, the Kraken UX is simple, intuitive, and frictionless, making the Kraken app a great place for all to get involved and learn about crypto. For those with experience, the redesigned Kraken Pro app and web experience is completely customizable to your trading needs, integrating key trading features into one seamless interface. Kraken has a 24/7/365 client support team that is globally recognized. Kraken support is available wherever, whenever you need them, by phone, chat, or email. And for all of you NFTers out there, the brand new Kraken NFT beta platform gives you the best NFT trading experience possible: rarity rankings, no gas fees, and the ability to buy an NFT straight with cash. Does your crypto exchange prioritize its customers the way that Kraken does? If not, sign up with Kraken at kraken.com/bankless.

Arbitrum One is pioneering the world of secure Ethereum scalability and is continuing to accelerate the Web3 landscape. Hundreds of projects have already deployed on Arbitrum One, producing flourishing DeFi and NFT ecosystems. With the recent addition of Arbitrum Nova, gaming and social dapps like Reddit are also now calling Arbitrum home. Both Arbitrum One and Nova leverage the security and decentralization of Ethereum and provide a builder experience that's intuitive, familiar, and fully EVM compatible. On Arbitrum, both builders and users experience faster transaction speeds with significantly lower gas fees. With Arbitrum's recent migration to Arbitrum Nitro, it's also now 10 times faster than before. Visit arbitrum.io, where you
can join the community, dive into the developer docs, bridge your assets, and start building your first app with Arbitrum. Experience Web3 development the way it was meant to be: secure, fast, cheap, and friction-free.

Learning about crypto is hard, until now. Introducing MetaMask Learn, an open educational platform about crypto, Web3, self-custody, wallet management, and all the other topics needed to onboard people into this crazy world of crypto. MetaMask Learn is an interactive platform, with each lesson offering a simulation for the task at hand, giving you actual practical experience navigating Web3. The purpose of MetaMask Learn is to teach people the basics of self-custody and wallet security in a safe environment, and while MetaMask Learn always takes the time to define Web3-specific vocabulary, it is still a jargon-free experience for the crypto-curious: user-friendly, not scary. MetaMask Learn is available in 10 languages, with more to be added soon, and it's meant to cater to a global Web3 audience. So are you tired of having to explain crypto concepts to your friends? Go to learn.metamask.io and add MetaMask Learn to your guides to get onboarded into the world of Web3.

Bankless nation, we are excited to introduce you to Robin Hanson. He is a professor of economics at George Mason University and a research associate at the Future of Humanity Institute at Oxford, an interdisciplinary research center that investigates big-picture questions about humanity and its prospects. And I think explaining exactly who Robin is and what he's doing is not a trivial task, because he's a polymath. He certainly spans many things; he's provided many different mental models across various disciplines. But I would not call him conventional by any means, and I'm sure, Bankless listener, you will see what we mean here today. Robin, welcome to Bankless.

Glad to be here. I think I can try to explain the kind of weird that I am. Yeah, go ahead, please, because I can't explain the kind of weird that you are. So I think I'm conventional on methods and weird on topics. I tend to look for neglected, important topics where I can find some sort of angle, but I'm usually looking for a pretty conventional angle, some sort of usual tools that just haven't been applied to an interesting, important topic. So I'm not a radical about theories or methods; I use things like science and math and statistics and all of those normal, non-radical things. I've spent a lifetime collecting all these usual tools, all these systems really, and I'm more of a polymath in that I'm trying to combine them on neglected, important topics. If you go to a talk where everybody's arguing and you pick a side, the chances you're right are kind of small, in the sense that there are all these other positions; maybe you'll be right, but probably you'll be wrong, because you're picking one of these many positions. If you go pick a topic where nobody's talking about it, you can just say anything sensible and probably be right.

And I think we recently ran into somebody who follows that path, of sorts: somebody who thinks very logically and rationally, but is applying it to more unique frontiers of the place humanity is at. That is our recent episode with Eliezer, who followed a decently logical path that was relatively easy to follow, but that unfortunately led us into a dead end for, like, humanity. And so it was something that Bankless, me and Ryan as
co-hosts of this podcast, but also many of the listeners, felt troubled by, because Eliezer was able to guide us down a very simple and logical path right over onto the brink. So we're hoping to continue that conversation with you, Robin, as well as explore some new frontiers.

Yeah, Robin, I'm just wondering if we could wade right into the deep end of the pool here, because what happened is, basically, Eliezer came on our podcast. We thought we were going to talk about AI and safety and alignment, all of these things he talks about a lot, and we thought we were going to tie that to crypto. What ended up happening midway through that podcast, Robin, is that I got an existential crisis, and so did David. The rest of the agenda seemed meaningless and unimportant, because here's Eliezer telling us, basically, that the AI was imminent. He didn't know whether it would happen in two years, in five years, in ten years, or in twenty years, but he knew the final destination, which is that AIs would kill all of humanity, and that we didn't have a chance. And I'm not being hyperbolic here, Robin; I know you haven't had a chance to go through that episode, but he basically says, you know, kiss your loved ones, spend time with them, because you do not know how much time you actually have. And so this left me, and I think many Bankless listeners, on kind of a cliffhanger of, oh my God, are we all gonna die? David tried to talk to me after that episode; he's like, Ryan, it's okay. But we knew we also had to find someone who could give us another interpretation of what is going on with AI. And Robin, we have ChatGPT-4 now. It looks incredibly sophisticated, it looks like it's advancing at breakneck speed, and we're worried about the scenario. So when Eliezer says we're all going to die, what do you make of that? Do you think we're all going to die?

So AI inspires a lot of creativity regarding fear. And I think, honestly, most people, as they live their lives, aren't really thinking about the long-term trajectory of civilization and where it might go, and if you just make them think about that, many people are able to see scenarios they think are pretty scary, just based on the projection of historical trends toward the future and things changing a lot. So I want to acknowledge there are some scary scenarios if you just think about things that way, and I want to be clear what those are. But I want to distinguish that from the particular extra fear you might have about AI killing us all soon. And I want to describe the particular scenario Eliezer has in mind, as I understand it, as a very particular scenario where you have to pile on a whole bunch of assumptions together to get to a particular bad end. And I want to say those assumptions seem somewhat unlikely, and piling them all together makes the whole thing seem quite unlikely. But nevertheless, if you just think about the long-term trajectory of civilization, it may well go places that would scare you if you thought about it. So that'll be the challenge for us, to separate those two. Which one would you like to go with first?

I would like to start with understanding what you think his assumptions are. All right, let's do that. Okay, so the scenario is: you have an AI system, some coherent system. It's got an owner and builder, people who sponsored it, who have some application for it, who are watching it and using it and testing it, the way we would do for any AI system. And then somewhere along the line, the system decides to try to improve itself. Now, this isn't something most AI systems ever do; people have tried that, and it usually doesn't work very well. Usually when we improve AI systems, we do it another way: we train them on more data, give them more hardware, use a new algorithm. But the hypothesis here is that this system is going to be assigned the task: figure out how to improve yourself. And furthermore, it's going to find a wonderful way to do that, and the fact that it found this wonderful way makes it now special compared to all the other AI systems. So this is a world with lots of AI systems; this is just one of them. It's not the most powerful or the most impressive or interesting, except for this one fact: it has found a way to improve itself.

And this way it can improve itself is really quite remarkable. First of all, it's a big lump. Most innovation, most improvement in all technology, is lots of little things; you gradually learn small things and you get better. Once in a while we have bigger lumps, and in this scenario there's a really huge lump, and this huge lump means the system can all of a sudden be much better at improving itself, not only than it could before, but in essence better than all the other systems in the world put together. It's really quite an achievement, this lump it finds. And in addition, this way to improve itself has two other features that are unusual for innovations. First, it's a remarkably broad innovation; it applies across a very wide range of tasks. Most innovations we have for how to improve things are relatively narrow; they let you improve a narrow range of things, but not everything. This innovation lets you improve a really wide range of things. And in addition, most innovations let you improve things, and then the improvements run out, until you find some other way to improve things again. But this innovation doesn't run out. It allows this thing to keep improving for many orders of magnitude, maybe ten orders of magnitude, max, or something. It's just a really huge innovation that just keeps playing out. It just keeps improving; it doesn't run into errors while it improves itself, things that slow it down and get it stuck for a long time. It just keeps working.

Okay. And whatever it does to pursue these innovations, these self-modifications, will change it. They'll probably change its software configuration, maybe its relative use of resources, the kinds of things it asks for, how it spends its time and money, the kind of communication it has. It's changing itself, and its owners and builders, the ones who sponsored it and made it and have uses for it, don't notice this at all. It is vastly improving itself, and its owners are just oblivious. Now, initially it's just some random obliviousness. At some point the system will get so capable that maybe it can figure out how to hide its new status and its new trajectory, and then it's more plausible that it succeeds at that, if it's now very capable of hiding things. But before that, it was just doing stuff, improving itself, and its owner-managers were just oblivious. Either they saw some changes and didn't care, or they misinterpreted the changes, or they had some optimistic interpretation of where this could go, but basically they're oblivious. If they knew it was actually improving enormously, they could be worried; they could, like, stop it, maybe pause it, try variations, try to study it and make sure they understand it. But they're not doing that; they are just oblivious.

And then the system reaches the point where it can either hide what it's doing or just wrest control of itself from these owner-builders. And if it were to wrest control of itself, presumably they would notice that, and they might try to retaliate against it, or recruit other powers to lock it down. But, by assumption, it's at this point able to resist that. It is powerful enough to either hide what it's doing or just wrest control and resist attempts to control it. At which point it continues to improve, becoming so powerful that it's more powerful than everything else in the world, including all the other AIs.

And then, soon afterwards, its goals have changed. So during this whole process, two things have to have happened. One is it had to become an agent. Most AI systems aren't agents; they don't think of themselves as, I'm this person in the world who has this history and these goals, and this is my plan for my future. They're tools that do particular things. Somewhere along the line, this one became an agent. This one says: this is what I want, this is who I am, and this is how I'm going to do it. And in order to be an agent, it needs to have some goals. During this process by which it improved, at some point it became an agent, and then at some point its goals changed, a lot, not just a little. Now, for any system, we can think in terms of its goals: if it takes actions among a set of options, we can interpret those actions as achieving some goals versus others, and for any system we can assign it some goals, although the range of those goals might be narrow if we only see a narrow range of actions, so we might not be able to interpret its goals more generally. If we have an AI system that's a taxi driver, we'll be able to interpret the various routes it takes people on, and how carefully it drives, in terms of some overall goals with respect to how fast it gets people there and how safely it does so. But maybe we can't interpret those goals much more widely, as in, what would it do if it were a mountain climber or something, because it's not climbing mountains. Still, with respect to a certain range of activities, it has some goals. And then, by assumption, in this process of growing, its goals just become, in effect, radically different. And then, by assumption, radically different goals arrived at through this random process are just arbitrarily different. And then the final claim is: under arbitrarily different goals, when they look at you as a human, you're mostly good for your atoms. You're not actually useful for much of anything else at some point, and then you are recruited for your atoms, i.e., destroyed. And that's the end of the scenario: we all die.

So, to recall the set of assumptions we piled on together: we have an AI system that starts out with some sort of owner and builder. It is assigned the task to improve itself. It finds this fantastic ability to improve itself, very lumpy, very broad, and it works over many orders of magnitude. It applies this ability, and its owners do not notice this, for many orders of magnitude of improvement.

Presumably, or when it happens really, really quickly, potentially? Well, that would presumably be the most likely way you could imagine the owners not noticing, perhaps. But the fundamental thing is that the owners don't notice; if it was slow and the owners still didn't notice, the scenario still plays out. The key reason we might postulate fast is just to create the plausibility that the owners don't notice, because otherwise, why wouldn't they notice? But that's also part of the size of this innovation, right? We're already improving AI systems at some rate, and if this new method of improvement were only going to improve AI systems at the rate they're already improving, then this AI system won't actually stand out compared to the others. In order for this to stand out, it'll have to have a much faster rate of improvement, substantially faster, to be distinguished from the others. So that would set the time scale for what it would take to be in this scenario. It both needs to be substantially faster than the rate of growth of other AI systems at the time, and fast enough that the owner-builders don't notice this radical change in its agenda, priorities, and activities, right up to the point where this thing acquires the ability to become an agent, have goals, hide itself, or, you know, free itself and defend itself. And then the last assumption: its goals radically change. Even though it was presumably friendly and cooperative with humans initially, later on it's nothing like that; it's just a random set of goals, at which point, by assumption, it kills us all. So the question is: how plausible are all those assumptions?
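To make Robin's "stand out" point concrete, here is a minimal numeric sketch. It is not from the episode, and the growth rates are invented assumptions purely for illustration: a self-improver only separates from the pack if its per-period improvement rate is substantially above the field-wide baseline, which is exactly the condition Robin says the foom story must assume.

```python
# Illustrative sketch of the rate argument above (not from the episode).
# Both growth rates below are assumptions chosen purely for illustration.

def capability(rate: float, periods: int, start: float = 1.0) -> float:
    """Capability after compounding `rate` improvement for `periods` steps."""
    return start * (1.0 + rate) ** periods

BASELINE = 0.10  # assumed field-wide improvement per period
FOOMER = 0.50    # assumed rate for the hypothetical self-improver

for t in (0, 5, 10, 20):
    field, foomer = capability(BASELINE, t), capability(FOOMER, t)
    # The ratio is what matters in Robin's framing: unless it grows quickly,
    # the system never stands out, and its owners can call on many
    # comparably capable AIs to contain it.
    print(f"t={t:2d}  field={field:8.2f}  self-improver={foomer:10.2f}"
          f"  ratio={foomer / field:6.1f}x")
```

With these toy numbers the self-improver is only about 6x the field after 5 periods but roughly 500x after 20, which is why the scenario leans on the improvement being both large and sustained over many orders of magnitude.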
So we could walk through analogies in prior technologies and histories over the last few centuries, and I think foom advocates like Eliezer will say, yeah, this is unusual compared to recent history, but they're going to say recent history is irrelevant for this. This is nothing like recent history; the only really relevant comparisons here, they'll say, are the rise of the human brain and maybe the rise of life itself, and everything else is irrelevant. So they will reject the technology trajectories of the last few centuries as not relevant analogies.

What did you just call Eliezer, Robin? A what advocate? A foom advocate. What is foom? Foom is just another name for this explosion we've been talking about. Yeah, intelligence explosion. Gotcha. Kurzweil's stuff, that kind of thing, the Singularity, that sort of thing? So the Singularity is a different concept than foom; a foom is a kind of singularity, but not all singularities are fooms.

Robin, thank you for guiding us, because we're still learning here. At Bankless we had never done an AI podcast previously; we've covered a lot with crypto and coordination economics, and now we're doing this AI podcast, and I feel like we just got punched in the face. Yeah, we're walking slower while we're here. Your re-articulating of Eliezer's assumptions is, I think, very helpful to me, and we want to get to why you think those assumptions are unlikely to be true. But I do think you're right: in the episode with him, he basically painted this fantastical story of these assumptions, and he basically said, and I don't want to put words in his mouth, so maybe this is just what I was hearing him say: those assumptions, the things that you're describing? You're just describing intelligence, Robin. That's what intelligence does. And I'll give you exhibit A: it's called human beings. And I'll give you the algorithm: it's called evolution, gradient descent over millions of years and hundreds of millions of years. And we end up with a superintelligence, or at least an intelligence that's super relative to the animal kingdom, that exerts its dominance, and whose will has changed from just procreating and spreading its genes and memetic material to something the evolutionary algorithm would never have envisioned it actually doing. So I think the criticism I was hearing would be: we already have an example of this, Robin. It's called intelligence, and it's called humans. What do you think about this?

So, as I said, if we just think about the long-run future, we can generate some scenarios of concern independent of this particular set of assumptions Eliezer has set up. You know, the scenario where humans arise and then humans change the world, I guess you could imagine that as scary to evolution, if evolution could be scared, but evolution doesn't really think that way. But certainly you can see that, in the long run, you should expect to see a lot of change, and a lot of ways in which your descendants may be quite different from you and have agendas that are different from yours. I think that's just a completely reasonable expectation about the long run. So we could talk about that in general as your fear; I just want to distinguish that from this particular set of assumptions that were piled on in the foom story. The foom story is something that might happen in the next few years, say, and it would be a very specific event: a particular computer system suffers this particular event, and then a particular thing happens. That's a much more specific thing to be worried about than the general trajectory of our descendants into the long-term future. So which one would you like to talk about?

I'm trying to summarize the perspective differences here, and I know you've had this debate with Eliezer before, so this is review for you. Eliezer's conclusion is that, while the future is unwritten, and the paths of our future can be many and multivariate, and we can have different possible outcomes, all roads lead to the superintelligence taking over. And I think, just to summarize your position: that is a possible path, and it is something to consider, but it is still less likely than the many, many other possible paths, which in aggregate are perhaps much more likely. Is that a fair summary of your position?

So let's talk about this other, more general framing and argument. We could just say: in history, humanity has changed a lot, not just a little, a lot. We've not just changed some particular technologies; we've changed our culture in large ways. We've changed the basic values and habits that humans have, and if our ancestors from ten thousand or a hundred thousand or a million years ago looked at us and saw what we're doing, it's not at all clear they would embrace us as descendants they are proud of and happy to have replaced them. That's not at all obvious. Even just in the last thousand years, or even shorter, we have changed in ways in which we have repudiated many of our ancestors' most deeply held values: we've rejected their religions, we've rejected their patriotism, we've rejected their
family allegiances and family-clan sorts of allegiances. We have just rejected a lot of what our ancestors held most dear, and that's happened over and over again through long-term history. That is, each generation, we have tried to train our children to share our culture; that's just a common thing humans do. But our children have drifted away from our cultures and continued to be different. And over a million years, we humans have fundamentally changed ourselves, and one of the things that happened is we became very culturally plastic, and so culture now is really able to change us a lot, because we have become so able to be molded by our culture. And even if our genes haven't changed that much, well, they've changed substantially, say, in the last 10,000 years, our culture has enormously changed us. And if you project the same trend into the future, you should expect that this will happen again and again: our descendants will change with respect to cultural evolution, and their technology, and the structure of their society, and their priorities. And then, of course, at some point in the not-too-distant future, we will be able to re-engineer what we are, or even what our descendants are, and that will allow even more change. That is, once we can make artificial minds, for example, there's a vast space of artificial minds we can choose from, and we will explore a lot of that space, and that allows even bigger possibilities for how our descendants could be different from us.

So this story says our descendants will become, yes, superintelligent, and yes, they will be different from us in a great many ways, which presumably also includes values. And if what you meant by alignment was, how can I guarantee that my distant descendants do exactly what I say, and believe exactly what I believe, and will never disappoint me in what they do, because they are fully under my control, I've got to go, gee, that looks kind of hard compared to what's happened in history. So now, if that's the fear you have, I've got to endorse that. That's not based on any particular scenario of a particular computer system soon and what trajectory of events it'll go through; that's just projecting past trends into the future in a very straightforward way. So then I have to ask: is that what you're worried about?

No, that is not what I'm worried about. That is my base case: we're going to get more intelligence, technology is going to change us culturally, it's going to change our trajectory. Yeah, but I've got to add one zinger to this. Okay. What if change speeds up a lot, so that this thing you thought was going to happen in a million years happens in a hundred?

Well, for me personally, I'm more of a techno-optimist, so I would be more on the side of, within reason, embracing these types of change. I know others aren't quite as embracing. And also, this was not the scenario at all that Eliezer presented. He presented the scenario not of rapid change that you might not like in the future, which could come within your lifetime, but of the actual obliteration of humanity, like literally rearranging our atoms for some other artificial intelligence's purpose. And while you agree that there will be lots of change, as there has been in the past, and perhaps that change will even accelerate as we delve deeper into the kind of technology that is in our future, you do not think that a superintelligent artificial intelligence will simply obliterate humanity and wipe us from creation entirely? It'll be,
it won't be quite as drastic as that. Look, let's be careful about noticing exactly what the difference is between the scenario I presented and the scenario he presented, because they're not as different as you might think. In both scenarios there are descendants; in both scenarios the descendants have values that are different from ours; and in both scenarios there's certainly the possibility of some sort of violence, or a disrespect of property rights, such that the descendants take things instead of asking for them or trading for them, because that's always been possible in history, and it can remain possible in the future. You know, today most change is peaceful and lawful. There are of course still big things that happen, but mostly it's via trade and competition, and if the AIs displace us, it's because they beat us fair and square at the usual contests we've set up, by which we compete with each other. So these scenarios aren't that different, I'm trying to point out.

The key differences here are: one, the time scale, how fast it happens; another, how spread out it is, is there a single agent who takes over everything, or are there millions or billions of descendants who slowly win out and displace us; how far do their values differ from ours, just how much do they become indifferent to us; and then, do they respect property rights, is this a peaceful, lawful transition, or is there a revolution or war? Those are the main distinctions between these two stories we've described. Eliezer's story is: it's very fast, there's a single agent, its values change maximally, and it doesn't respect previous property rights. Whereas the scenario I'm describing is ambiguously fast, hey, it could happen much faster than you think, with millions or billions of descendants, with a perhaps gradual and intermediate, but substantial, level of value difference, but primarily, I would think, in terms of peaceful, lawful change.

I think there's a missing component to this conversation that we've been having. I understand that there are things about the evolution of this AI and things about the evolution of humanity that are basically synonymous, right? There's iteration, there's development, there's progress. And Robin, you gave the account for that: when we raise our kids, we try to imbue them with our values and our cultures, and there are transcription errors in that, in that only so much of our values and cultures gets passed along to our kids, and perhaps as technology advances, even less passes along from generation to generation, and our culture changes over time, and this is what we call progress. And when we go back to the AI innovating on itself, there you also presented a scenario of improvement errors as well: we don't know how perfectly it can improve, and so as it develops, it changes and adapts. These are all similar structures. This is what we know, and maybe the time scales throw us off a little bit, but these are similar patterns.

There's one component missing that I'd like to highlight and dive into. When we have our generations of kids, and humanity progresses, even if it changes, it still started from us in the first place, right? There's a logical continuation of parent to kid, parent to kid, parent to kid, so it at least starts from a place of continuation. I think the problem with this AI alignment and superintelligence explosion issue is that in the moment we create this AI, it actually doesn't upload our value system, because we are creating a
completely new life form. It is not biological life; it is not DNA growing up into an adult to combine with somebody else's DNA to create a kid who then grows up. That isn't being carried forth. So in the moment we create AI, it has no trail of evolutionary history to imbue it with values and judgment and a way of perceiving the world in an aligned fashion. In that creation moment it is completely rogue, and we don't know how to understand it, and it doesn't know how to understand us, because it is a completely new form of life, with a completely new form of appreciating and understanding values. I think that's the missing component: even though there are similarities in how these things progress, the bootloader for values and alignment is missing in this AI, and I don't think we've touched on that yet.

So, I do some work on aliens; we could talk about that later if you want. I'm looking forward to that part of the conversation, by the way. But I'm quite confident that, compared to all the aliens out there in the universe, and all the alien AIs that they would make, the AIs that we will make will be correlated with us, compared to them. We aren't making random algorithms drawn randomly from the space of all possible algorithms and machines; that's not what we're making. We are making AIs to fit in our world. So, like the large language models made recently, the most impressive things: those are far from random algorithms from the space of all possible algorithms. They are modeled after us. And in the next few decades, as we have more AI applications, most machines will be made by firms trying to make profits from those AIs, and what they'll be trying to do is fit those AIs into the social slots that humans occupied before. So they'll be trying to make the AIs like humans, in the sense that they will have to look and act like humans well enough to sit in those social slots. If you want an AI lawyer, it'll have to talk to you somewhat like a human lawyer would, and similarly for an AI housekeeper, etc. We will be making AIs that can function and act like humans exactly so that they can be most useful in our world, and we are the ones making them, and so, just out of habit, we're making them like us in some abstract sense.

Now, there's a question of how much like us, and then there's the question of, well, how much did you want, and how much is feasible, and how close are your kids anyway, or your grandkids? Because just remember how much we humans have changed. I think when you look at historical fiction or something, it doesn't really come across clearly, but we humans have changed a lot, and are changing a lot, even in the last century. If you just look at the creative change of human culture and attitudes and styles in the last century, and project that forward a hundred more centuries, you've got to imagine our descendants could be quite different from us, even if they started from us.

And it's interesting: it's mostly software changes, would you say, at the cultural level? I mean, human hardware hasn't really changed recently. Yes, although we have substantially changed the hardware too, but yes, mostly software. But in the future we will be able to make hardware changes to our descendants. I have this book called The Age of Em: Work, Love, and Life when Robots Rule the Earth, and it's about brain emulations. This is where we make very human-like creatures who are artificial, using artificial hardware, but then they can modify themselves and become more alien more easily, because they
can more easily modify their hardware and software, as they are basically computer simulations of human brains. So if that happens soon, then even that human line of descendants will be able to become quite different in a relatively short time.

Ryan, if you thought the AI alignment problem would throw you for a loop, I can't wait until we get into the conversation about synthetic biology separating humans into some who become gods and others who don't, but that'll be a different podcast. Robin, I think in your argument here you baked in the belief, the assumption, that these AIs will adopt our values merely by, like, osmosis from the devs and the engineers who are coding them up, because they will code them up to do certain things and behave in certain ways, using the characters on our English keyboards, for example. And merely by the association of being created by us, it's actually impossible not to imbue them with our culture and our values. Is that what you're saying?

Well, there's a big element of that. How is it that you think your children are like you? I mean, they are, mainly because they're biological cells, not computers. Though humans are really quite culturally plastic; maybe that's another thing people really don't get yet. So anthropology has gone out and looked at a really wide range of human cultures and found that humans are capable of behaving and thinking very differently depending on the culture they grew up in. That's the basic result of anthropology. There are some rough human universals, but mostly we're talking variation. The fact that you seem very similar to all the other humans around you is not about some innate human similarity you share; it's because you are in a similar culture to them.

So, just to re-articulate your position here: I think we were saying that Eliezer is perhaps fearful that superintelligent AI and humans are so far apart that they can never come to coexist, and what you're saying is that life as a whole has similarities no matter how it manifests or how it is expressed. Is that how you would say it?

I was trying to tell you that your descendants could be really different from you. I wasn't trying to convince you that there was a bound on just how different your descendants could get. I was trying to show you that, in fact, your descendants could get really different, not through this foom story, just through the simple, default way that society could continue to change. If you're going to be scared about the foom story, maybe you should be scared about that one too. We could start to talk about what we might know in general about intelligent creatures, and what might be the common features across them, for all alien species through all of space-time or something. There probably are some general things they have in common, but they might be fewer than would comfort you.

I definitely want us to get there, but really quick, picking apart the assumptions you laid out: I want to see which ones, more specifically, you might disagree with or state in a different way than Eliezer. You said assumption one is that the AI improves itself; that seems core to what Eliezer thinks. Assumption two: the owners, that is, the people who programmed it, don't take control, don't try to stop it. Assumption three: the AI becomes an agent. And assumption four: the agent's goals change, the AI's goals change, and it ends up destroying humanity. I find some of these harder to believe than others,
particularly assumption four. I just didn't get the argument for why the AI suddenly destroys humanity; maybe we could talk about that. But let's start at the top. Do you have a disagreement with assumption one, that an AI will recursively start to improve itself?

Well, remember, I tried to break assumption one into multiple parts, to show you that it requires multiple things all coming together. Not only does it try to improve itself, it finds this really big, lumpy improvement, which has enormously unusual scale, in terms of how far it goes before it runs out, and scope, in terms of how many things it allows the improvement of, and magnitude, it's just a huge win over the previous thing. Those are all a priori unlikely things. So it's not the part where it tries to improve itself; that seems quite likely. Sure, somebody might well ask a system to try to improve itself. But that it would find such a powerful method, and then still not be noticed by its owners, that gets pretty striking as an assumption.

I understand. And so that's where it's tied in: you find it hard to believe that the owners, the creators of this AI, wouldn't be able to stop it from doing something nefarious or devious. That is also a difficult assumption?

Well, first, just notice that, by assumption, this thing starts out at a modest level of ability. By assumption, this thing is comparable to many, many other AIs in the world. So, by assumption, if you could notice a problem early on, then you could stop it, because you could bring together thousands of other AIs against this one to help you stop it, if you want to stop it. At some point later in this evolution, it may no longer be something you could stop, but, by assumption, that's not where this starts. It starts out comparable to other AI systems, and then it has this one advantage: it can improve itself better.

And then there's this other assumption, what I'd label number three: the AI becomes an agent. How likely is an AI to become a self-interested, acting agent? Is that difficult to foresee?

Well, of course, some owners might make it that way, but most won't. So we're still narrowing down the set here. My old friend Eric Drexler, for example, has argued that we can have an advanced AI economy where most AIs have pretty narrow tasks. They aren't general agents trying to do everything; they drive cars to the airport, or whatever, they each do particular kinds of tasks. And that's in fact how our economy is structured: our economy is full of industries made of firms, each of whom does particular tasks for us. And so a world where those firms are now much more capable, and even artificially intelligent, even more than superhumanly capable, can still be a world where each one does a pretty narrow task, and therefore isn't the general agent that would do enormous things if it became more powerful. So if you had a system that was really good at route planning, say, getting cars from A to B, if it was superhuman at that, it might just be really good at route planning. And if that's all it does, it's not plausibly going to suddenly transition to an agent who sees itself as having a history and goals for the whole world, and who tries to figure out how to preserve itself and make itself grow. That's pretty implausible for a route-planning AI. So in a plausible future, most AIs would be relatively narrow and have relatively narrow tasks, but
sometimes somebody might make more general AIs that have more general scope and ambitions and purposes, and then those might be the basis of a scenario here. But the people who created those AIs would know their unusual feature. They would know this one is an agent, and they would presumably take that into account in their monitoring and testing of the thing; they're not ignorant of this fact. So the scenario whereby the route-planning AI just accidentally becomes an agent, I mean, that's logically possible, but now we've got to ask: how often do systems designed for purpose A suddenly transform themselves into something that does a whole different thing D? It happens sometimes, but it's pretty rare.

And so let's say it gets through all of these gates, right? We have an AI that improves itself in broad ways, in ways that are somewhat lumpy; the owners, for whatever reason, aren't able to take control, the AI restricts them in some way; maybe the owners have programmed this AI to become an agent, so it's an agent acting of its own free will. This last point, then, Eliezer's conclusion, the point that was most concerning, of course, is that this AI then comes and destroys humanity. And I think his rationale is basically: why not? It would have other purposes, and humanity? It would just step over us. What about this assumption?

So imagine that instead of one AI, we have a whole world of AIs who are improving themselves and whose values are diverging. That's more of a default scenario. If that happens in a world of property rights, then, say, humans are displaced and no longer at the center of things. We're not in too much demand; we basically have to retire. Humans go off to our retirement corner and spend our retirement savings. If that stays a peaceful scenario, then all these AIs who change and have other purposes, they don't have to kill us. They can just ignore us, off in the corner, spending our retirement savings. But there's the possibility of a revolution, say, whereby they decide: hey, why let these people sit in the corner? Let's grab their stuff. I mean, the possibility of a violent revolution has always been there, and it's there in the future. But in the world we're living in, that's a rare thing, and that's good, and we understand roughly why it's rare.

So the thing that's different in Eliezer's scenario is that, because it's the one AI, you see, it's not in a society where revolutions are threatening. It's just the one power, and then, from its point of view, why let these people have their property rights? Why not take it? Now, I would say that the main thing there is not that it has different goals, but that it's singular, and therefore not in a world where it needs to keep the peace with everybody else and be lawful, for fear of offending others, or of retribution; it can just go grab whatever it wants. That's the distinctive feature of the scenario he's describing. In a more decentralized scenario, again, I think there's much more hope that even if AIs displace us, even if their goals become different from ours, they could still keep the peace, because plausibly they could be relying on the same legal institutions to keep the peace with each other as they keep with us. And that's, in some sense, why we don't kill all the retirees in our world and take their stuff. Today there are all these people who are retired, and, like, what have they done for us lately, right? Kill the retirees and take their stuff! But we don't. Why don't we do that? Well,
we share these institutions with the retirees, and if we did that, it would threaten the institutions that keep the peace between the rest of us, and we would each have to wonder who's next, and this wouldn't end well. Okay? And that's why we don't kill all the retirees and take their stuff: not because they're collectively powerful and can somehow resist our efforts to kill them. We could actually kill them and take their stuff; that would actually, physically work. That's not the problem with that scenario. The problem is what happens next, after we kill them and take their stuff: who do we go for next, and where does this end? So a future of AIs who become different from us, acquire new goals, and are agents threatens us if they have a revolution and kill us and take our stuff. That's the problem there. And Eliezer's scenario, you see, makes that seem more likely by saying there's just the one agent. It has no internal coordination problems, it has no internal divisions; it's just this singular thing. And honestly, we could add that as another implicit assumption in this scenario: he assumes that as this thing grows, it has no internal conflicts. It becomes more powerful than the entire rest of the world put together, and yet there are no internal divisions of note, there's no coup, right; it doesn't have different parts of itself that fight each other and that have to keep the peace with each other. Because that's why we have law and property rights in our world, you see: because we have conflicts, and this is how we keep the peace with each other. And he's setting that aside by assuming it doesn't need to keep the peace internally, because it's this singular thing.

You know Uniswap as the world's largest DEX, with over 1.4 trillion dollars in trading volume, but it's so much more. Uniswap Labs builds products that let you buy, sell, and use your self-custodied digital assets in a safe, simple, and secure way. Uniswap can never take control of or misuse your funds, the Bankless way. With Uniswap you can go directly to DeFi and buy crypto with your card or bank account on the Ethereum layer 1 or layer 2s. You can also swap tokens at the best possible prices on uniswap.org, and you can find the lowest floor price and trade NFTs across more than seven different marketplaces with Uniswap's NFT aggregator. And coming soon, you'll be able to self-custody your assets with Uniswap's new mobile wallet. So go bankless with one of the most trusted names in DeFi by going to uniswap.org today to buy, sell, or swap tokens and NFTs.

The Phantom wallet is coming to Ethereum. The number one wallet on Solana is bringing its millions of users and beloved UX to Ethereum and Polygon. If you haven't used Phantom before, you've been missing out. Phantom was one of the first wallets to pioneer Solana staking inside the wallet, and it will be offering similar staking features for Ethereum and Polygon. But that's just staking. Phantom is also the best home for your NFTs. Phantom has a complete set of features to optimize your NFT experience: pin your favorites, hide the uglies, remove the spam, and manage your NFT sale listings from inside the wallet. Phantom is, of course, a multi-chain wallet, but it makes chain management easy, displaying your transactions in a human-readable format, with automatic warnings for malicious transactions or phishing websites. Phantom has already saved over 20,000 users from getting scammed or hacked. So get on the Phantom waitlist and be one of the first to access the multi-chain beta. There's a
link in the show notes, or you can go to phantom.app/waitlist to get access in late February.

So we should really hope for a pluralistic world of many AIs, and in fact you think that's the more likely world anyway? Of course, yes. We're already in a world of a great many autonomous parts, right? We have not only billions of humans, but millions of organizations and firms, and even nations and government agencies. And one of the most striking features of our world is how hard it is to coordinate among all these differing interests and organizations, and one of the most striking features of our world is the mechanisms we use to keep that peace and to coordinate among all these divergent, conflicting things. And one of the moves that AI people often make to spin scenarios is just to assume that AIs have none of that as a problem: AIs do not need to coordinate, they do not have conflicts between them, they don't have internal conflicts, they do not have any issues in how to organize and how to keep the peace between them. None of that's a problem for AIs, by assumption. They're just this other thing that has no such problems, and then, of course, that leads to scenarios like: then they kill us all. Right, like AIs are a monolith.

But I think one of the reasons I appreciate your line of reasoning, Robin, and how you think, is that you tap into what seem to be fundamental truths of this universe, things you would find here on planet Earth or in a galaxy far, far away, certain things that can be assumed no matter what the environment is, and then a lot of your logical conclusions are just natural extensions of that. So I want to, unless you have a thought on that?

I was just going to say, I think a lot of disagreements in the world are often based on people having different sets of abstractions and mental tools, and then finding it hard to merge them across topics. I think when a community has a shared set of abstractions and mental tools, even when they disagree about details, they can use those shared abstractions to come to an agreement. But sometimes you have people with just different sets of abstractions. That's true. So I'm bringing a lot of economics to this, right? Other people might be bringing a lot of computer science. But I'm going to play my polymath card and say I've spent a lifetime learning a lot of different sets of conceptual tools and intellectual systems, certainly including big chunks of computer science, and I'm trying to integrate all those tools into an overall perspective where I can pull each observation or insight into it.

So is this the economic reason the robots aren't going to come kill us, then? Is that what you're providing? Or, if they kill us, they would do us in the usual economic way? I can't assure you that nobody will ever kill you, okay? But, you know, we have an understanding of the main ways that, in the last few centuries, people have been killed; that's been something people have paid attention to. How do people get killed? How does that happen? So murder is one way people get killed, war is another way, revolution is another way, or sometimes just displacement, where something outcompetes you and then you don't have any place to survive. So, in some sense, horses got outcompeted by cars at some point, and they suffered substantially. We understand how that works out. That's the sort of thing that can happen to humans: we could suffer the
That's interesting. Did horses suffer, though?

By population standards, yes, they diminished significantly.

Did any individual horse suffer and feel suffering as a result of cars? It seems like a good life to me on an equestrian farm, rather than slaving away in a cityscape being whipped by a master.

My understanding is that the horse population is now about as high as it ever was, though I'm not certain of that fact, but it's not as high as you might have projected had horses continued their previous growth rates. So there was a substantial decline, and now most horses are pets and not work horses.

Right. I'm not sure I'm ready to be a pet, but that's a problem for my kids, probably. Hopefully. Just a quick scenario question, Robin: what's more likely, a single monolithic superintelligent AI that does the Eliezer thing, or a human-versus-robot conflict that's more traditional, in the sense that there are two sides? Which is more likely?

That second one seems far more likely to me, but you should put it in context. Humans at the moment vary along an enormous number of parameters: we vary by gender and age and profession and geographic location and wealth and personality. And in politics especially, we divide ourselves up and form teams and coalitions by which together we oppose other coalitions. This is just ancient human behavior: we form coalitions and fight each other, and we expect that will continue. Arguably, democracy has allowed us to have more peaceful conflicts, where coalitions fight in elections rather than wars, but even in firms there are often political coalitions fighting each other. There's always the question of what the basis of the dominant coalitions will be, and there's a wide range of possibilities. You could have a gender-based one, the men fighting the women; you could have an age-based one, the old fighting the young; you could have an ethnicity-based one; you could have a professional one, so in a firm it might be the engineers versus the marketers. Humans versus robots, or robotic descendants, is one possible division on which future conflicts could be based. That's completely believable, and I can't tell you it can't happen. The main thing I'll point out is that it will be competing with all these other divisions. Will it be the humans-versus-robots conflict, or the old versus the young, or the word cells versus the shape rotators? There are all these different divisions, and it could well be that there's an alliance of human word cells and AI word cells versus human shape rotators and AI shape rotators, and that becomes the future conflict. In some fundamental sense the division of the conflict is indeterminate. That's a fun little thing we understand about politics: whatever divisions you have, they're unstable to the possibility of some new coalition forming instead. It's hard to keep coalitions stable, because they're so easily undermined by new ones.

Well, with the human-versus-robot division, looking at past human behavior, we tend to be pretty tribal, but I think when we have robots it would be really easy to forget our internal conflicts when there's a completely different competitor for resources. Why do humans fight? It's usually over resources, economic resources.
And when there is a new species that is subdividing and iterating and growing as humans do, that's also sucking up resources. I don't know if they're going to be metal in the future, but my current vision of them is metal, silicon, Terminator-type robots walking around. There are only so many resources on the planet, so that would be a pretty easy dividing line between humans and robots, one that I could imagine would make that conflict much more likely. So regardless of whether it's the Eliezer way, in which a single monolithic superintelligent AI comes and we have to fight it, at some point there's conflict, potentially, and I might even say likely, if there is a different, call it, species there.

This is kind of an aside, going back to the superintelligence stuff, but I think we can now call it just AI conflict: the Future of Life Institute recently released an open letter calling for a pause on giant AI experiments. A few people signed it: Elon Musk, Steve Wozniak, Yuval Noah Harari, Andrew Yang. I just want to get your reaction to this letter and the people signing it, Robin. Would you sign this letter, or are you against signing it, and what do you think of the idea?

David, can we tell people what it says too? It's basically a call on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4. So this letter says: don't go beyond GPT-4; beyond there it gets even scarier; let's pause, let's halt, let's figure out this AI alignment issue first.

So first, just notice that we've had a lot of pretty impressive AI for a while now. It's when the AIs are the most human-like, with these large language models, that people are the most scared and concerned. That suggests that maybe a very advanced AI will look pretty human-like in many ways. And don't forget that our descendants will start to add metal to themselves and become different; brain emulations, say, are metal and quite different. So again, it's not so obvious where the dividing line would be. But to go to this particular letter: first of all, with respect to the general concerns they have, if we had a six-month pause, at the end of it we really wouldn't know much more than we know now. The main purpose of the pause would seem to be to allow time for government to get its act together and institute some more official law that enforces such a pause to continue. That would be the main purpose, so you'd want to support the pause if you wanted that further event to happen. It's not like we're going to learn that much in six months.

Or maybe if you were a competitor and wanted to catch up.

So then we get to: if we could do the pause, would it be a good idea? And one of the issues is who would participate and who wouldn't. Say, ideally, we could get a global pause somehow; would that be a good idea? Now we're basically talking about whether we should shut down this technology for a long time, until people feel safer about it. For that issue, I think the comparison with nuclear energy is quite striking. Basically, around 1970 the world decided to back off of nuclear energy, and we instituted regulatory regimes that allowed the safety requirements asked of nuclear power plants to escalate arbitrarily, until they started to cut into costs.
That basically guaranteed nuclear power would never become lower-cost than the other ways of doing things, and people were okay with that, because they were just scared of nuclear power. The generic fear didn't go away, and they just generically said: this can never be safe enough for us; whatever extra budget we have, we want it to be safer. That's the way they put it. And I would think a similar thing would happen with AI. The kinds of reassurances people are asking for are just not going to be feasible for decades at least, so you'd basically be asking for this to be paused for decades. And it's even hard to imagine eventually overcoming that, because the fundamental fears, as we've been describing, are just the idea that they might be different, they might have different agendas, and they might out-compete us, and that's just not going to go away. So I would see this as basically: do you vote for substantial technological change or not? And I get why many people might think, look, we're rich enough, we're doing okay, let's not risk the future by changing stuff. They voted that way on nuclear power, and they might well vote that way on AI. I would rather we continue; I think there's enormously far we can go if we keep improving our tech and growing. But I can understand why many people think: nope, we got lucky so far, things didn't go too badly, we're in a pretty nice place, why take a chance and change anything?

That's all if it were possible to actually have global enforcement of such a pause and then a further law. But of course that just looks really hard. This technology is now pretty widely available. It might be that the best new systems come from the biggest companies, which can afford the most hardware to put on the problem, but the basic software technology here is actually pretty public and pretty widely available. So over the next few decades, even if you manage to say no billion-dollar project can do this, you're going to have lots of less-than-a-million-dollar projects doing it. It'll be hard to have a global ban. The US now has a commanding lead, and the main effect of a delay, if it's not global, would be to take away the US lead. This just looks like a hard technology to ban. You might be able to get Google and OpenAI and Microsoft to pause their efforts, because they are big companies with pretty public activities, but...

And Robin, I'm trying to understand: I get the reasons you gave why it's not enforceable, and why it's very difficult to do some sort of global ban, but let's say for a minute that it was. Would you support it? Do you think this is worth pulling the fire alarm over?

Again, I think it's comparable to, say, genetic engineering or nuclear energy or other large technologies we've come across in the last few decades, where there really is huge potential but there are also really big things you could be worried about. Honestly, I think you just have to make a judgment on the overall promise-versus-risk framing; you can't really make a judgment here based on very particular things, because that's not what this is about. We made a judgment on nuclear energy to just back off and not use it that much; that's a judgment humanity made fifty years ago. Within the last few decades we made a similar judgment on genetic engineering: basically, nope, we just don't want to go there, for humans at least.
And we may be about to make a similar decision about AI. Honestly, this trend looks bad to me. Note that many people also think social media was a mistake and maybe we should undo that and go back on it.

So the trend of blocking technological progress is bad to you in general, whether it's nuclear or genetic engineering or social media or AI or any of these things?

Right. I'm actually concerned that this is the future of humanity here. I did this other work on grabby aliens, on the distribution of aliens in space-time, and in that framework the most fundamental distinction between alien civilizations is the one between the quiet ones, who stay in one place, live out their history, and go away without making much of a mark on the universe, and the loud ones, who expand and then keep expanding until they meet other loud ones. And I can see many forces that would tend to make a civilization want to be quiet, and that's what we're talking about here. Even in the last half century the world has become a larger integrated community, especially among elites, whereby regulatory policy around the world has converged a lot even though we have no world government. You certainly saw that in Covid, but you also see it in nuclear energy and medical ethics and many other areas. Basically, the elites around the world in each area talk mainly to each other, they form a worldwide consensus about the right way to deal with that area, and then they all implement it, so there's not actually that much global variation in policy in a wide range of areas. And people like that. It's certainly reduced wars of various kinds, and people like the idea that instead of nations fighting and competing with each other, we're all talking together, deciding what to do together. That sort of talking may deal with global warming, it may deal with inequality, it may deal with overfishing; there are a bunch of world problems these people talking together feel like they're solving. People will like this world we're moving into, where we all talk together and agree together about what to do about most big problems. And that new world will just be much more regulated, in the sense that they will look at something like nuclear energy and everybody will say, nope, we don't want to do that, let's shame anybody who tries, and slowly, together, they limit humanity's future. That could go on for thousands of years. And then, if we ever reach a point where it's possible to send an interstellar colony to some other star, we will know that if we allow that, it's the end of this era. Once you have colonists go somewhere else, they are out of your control. They are no longer part of your governance sphere. They can make choices that disagree with what you've done; they can have descendants who disagree; they can evolve and become different from the center, and eventually come back to contest control over the center. So that becomes a future world of competition and evolution that could go to very strange and stark places. But if we all just stay here and don't let anyone leave, then we can stay in this world where we talk together, we decide things together, we only allow our descendants to become as weird as we want them to be, and if we don't want a certain kind of weird descendant, we just shut it down. That's the quiet civilization we may become.
And that's kind of what's at stake here, I would say, with banning AI. It's one of many questions like that which we are answering about whether we want to allow change and large new capacities that might bring strangeness and conflict.

So I think this is the moment where this episode goes from continuing the conversation we had about AI with Eliezer, and all of those alignment problems, and becomes part of a larger conversation we've been having on Bankless for a while now, about the status quo versus innovation and progress, as well as what you were just saying, Robin, about grabby aliens. So I want to try to connect these dots really quickly. There's this idea of AI innovation, along with crypto innovation, and whether it should be regulated and contained by the elites, by the status quo. Are the elites happy with the harmony of the social order, so that perhaps we shouldn't have new competition and new exploration into the frontier, because that's how we maintain the social order? What you're saying this does is keep us in an isolationist posture, except it's isolationism from inside planet Earth. And I think, being the future-tech optimists that Ryan and I are, and I think you are as well, you aren't for that. You would like to break that isolationism that has the social elite saying: hey, let's not experiment with crypto or AI or longevity or synthetic-biology research; let's keep everything harmonized and in control, and we will use our large centralized power to keep the world under control. And then we have this other conversation we're about to go into, which is grabby aliens: whatever alien species is expanding out into the universe chose not to do that. They chose to explore the frontier; they chose to innovate under competition, capitalistic competitive competition, and to start expanding outward into space. I think baked into your argument is that you actually do need competition in order to explore the frontier. So I'm wondering, first, whether that was a good summary, and second, whether you see how this concern about AI, or about progress in general, links to the grabbiness or quietness that you describe in aliens. And maybe you can characterize these different kinds of aliens and the choices they make as civilizations.

Yes, I thought that was a reasonable summary. I think when we see people today discuss the possibility of our descendants spreading into the galaxy, they are often wary and a bit horrified by the impact it might have. That is, the sort of people we've become over the last century are people who find that a jarring and even unpleasant scenario, because it is actually, fundamentally, jarring and unpleasant. So I am with you in wanting to allow such changes, but I want to be fully honest about the costs we are asking the world to accept. If you wanted our descendants to just stay pretty much like us, indefinitely, that's not what we're talking about here. The cost of allowing our descendants to expand into the universe and explore technologies like AI and nuclear power is, literally, alienation.
That is, we are now alienated from our ancestors. Our world and lives are different, and we feel at some level that we were not built for the world we're in; this is an alien world compared to the world we were built for, and we feel that deep inside us. That will continue, and it will only get worse. The time it'll get better is when we can go change who is inside us to become more compatible with these alien worlds, but that will make those descendants even more different from us. So that's really the cost you have to be weighing. This future world of strange new technologies is also a competitive world, and that competitive world includes conflict. It includes some kinds of things displacing others, some things being shunted aside and marginalized, and it may even include war and violence. It almost certainly includes radical change to nature, and not just biology on Earth: our descendants who go out into the universe would likely not just pass by and plant flags. They will take things apart and rearrange them and radically change them, and sometimes that'll be ugly, and sometimes it'll be violent, and sometimes it'll leave crude, ugly waste and be inefficient where they could have done better. That will be the course, and this universe we see now, pristine, the way it was from long ago, will just be erased. That's the cost.

So I want to explore this idea of grabby aliens, and I'm sure listeners being thrown this odd adjective "grabby" might be a little confused. I'm hoping we can explain the nature of grabbiness, but do it in the context of planet Earth and human history, because I think it naturally extrapolates into the galaxy, which is where grabby aliens play. First I want to ask you the question: humans, are we grabby? Because if you look back in history, you have some quiet human tribes that were found by the grabby humans; you can call these the conquistadors, or the conquerors. The Roman Empire was a very grabby empire; in my attempt to understand Robin Hanson's work on grabbiness, I would call any empire that expanded grabby. And these grabby empires found the quiet, peaceful tribes and grabbed them, and assimilated them into the grabbiness. So that's how I'd present this in a context we understand, human history. But I want to ask you this very basic question about human nature: are we grabby?

So, almost all biology has been grabby, and therefore almost all humans. But it's not so much about our nature. The fundamental point here is that there's just a selection effect; that's the key point. If you have a range of creatures with different cultures or biological tendencies, and some of them go out and expand and others don't, then if there is a place they can expand to, where they could actually reproduce, there's a selection effect by which whichever ones do that come to dominate the larger picture; that's just the key selection effect. There may be many alien species and civilizations in the universe, and maybe most of them choose not to expand, but the few who do allow expansion will come to dominate, by space-time volume, the activity of the universe.
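To make that selection effect concrete, here's a minimal sketch. All the numbers are invented for illustration (the 1% expansion rate, the unit volumes, the time horizon); the only claim is structural: a lineage expanding at constant speed occupies volume growing like t cubed, so even a rare expansionist minority soon holds almost all the occupied volume.

```python
# Minimal sketch of the selection effect: rare expanders dominate.
# All parameters are illustrative assumptions, not from the episode.
import random

def grabby_volume_share(n_lineages=10_000, p_grabby=0.01, t=100.0, seed=0):
    rng = random.Random(seed)
    quiet_vol = 0.0
    grabby_vol = 0.0
    for _ in range(n_lineages):
        if rng.random() < p_grabby:
            grabby_vol += t ** 3   # sphere expanding at constant speed
        else:
            quiet_vol += 1.0       # stays home at unit volume
    return grabby_vol / (grabby_vol + quiet_vol)

for t in (1, 10, 100):
    print(f"t={t:>3}: grabby share = {grabby_volume_share(t=float(t)):.4f}")
# The share climbs from ~1% toward ~100% as t grows, even though
# only ~1% of lineages ever chose to expand.
```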
And that's how evolution has worked in the past. It's not that all animals or all plants are aggressive, violent, and hostile; it's that they vary. Some of them have a habit of sticking in one place and hiding; others have a habit of jumping out and going somewhere else when they can. And the net effect of the variation in all their habits is that when a new island pops up, it gets full of life, because some of the things that move land there and grow. A mountain grows higher, and new life shows up at the top of the mountain. A new niche of any sort opens up where life is possible, and some life goes there and uses it. That's just the selection effect, and that's what we should expect in the universe.

Then there's the question of which way we will go. Focusing on humans, I'd say it's a trade-off between what would happen if we don't coordinate and how hard we will try to coordinate. In an uncoordinated humanity, there's certainly enough variation that some of us would go be grabby. It might not be most of us, but certainly some of us, given the opportunity, would go grab Mercury or Pluto or whatever else, and then go out and grab farther things. We might choose to prevent that; we might choose to organize and coordinate so as not to allow those things to happen, and we might succeed at that; we perhaps have enough capability to do it. So then it becomes a choice: will we allow it? Basically, whenever you're talking about something that only takes a small fraction of us to do, and we vary a lot, the question is whether we will allow that variation to make it happen, or whether we will somehow try to lock it down.

The Bankless audience is pretty familiar with the idea of Moloch; it's a topic we've revisited a number of times. Are you familiar with Moloch?

I'm familiar with the famous Scott Alexander essay on it, although I think the concept isn't entirely clear in that essay.

Sure. So Moloch being the idea of the prisoner's dilemma: say you have two, or almost any number of, human tribes on the Earth, and most of them decide to be quiet and peaceful. It really only takes one to be grabby, and that one will come to dominate the Earth, because it chose to be grabby and it grabbed everything else. It's almost a prisoner's dilemma: if you choose not to be grabby, you are implicitly making the choice to be grabbed by the larger tribe that has elected to be grabby. And I think this is how we extrapolate into the future with your grabby-aliens thesis. Sure, there are many civilizations out there; maybe there are many like us that exist on only one planet and have a bunch of elites saying: hey, let's not investigate AI, let's not investigate longevity or genetic engineering, let's just stay put. We could call these the quiet aliens, or we could be the quiet aliens. The choice being made is that grabby aliens are eventually going to arrive on Earth and grab us; if you don't become a grabby alien, you are going to be grabbed by somebody else. And this is why I think this moment in human history, when we have this letter saying let's pause AI research, is what you are focusing on: this is a very important decision point for humanity as to whether we choose to be quiet or not quiet. Of course this isn't the only choice, but it is one of many choices, down a long list of choices, that could actually decide, culturally, what we want to be, at least for the short term.
Is this how you see the fork in the road we're currently at?

Well, let's just clarify. In a peaceful society like ours, we could think of a thief as grabby, and then we could say: well, if we don't steal, somebody else will steal, so I guess we should steal. You can imagine a world where that was the equilibrium. But if we coordinate to make law, then we can coordinate to watch for a thief and repress them sufficiently to discourage people from being thieves. So a universe of sufficiently powerful aliens could coordinate to prevent grabbing if they wanted. The claim, which I believe is true, is that in fact the universe hasn't done that. It might be that within our human society we have coordinated to enforce particular laws, but out there in the rest of the universe it's just empty, and there's pretty much nobody doing anything through most of it that we can see. So it is really just there for the grabbing. No one's going to slap our hands for grabbing the stuff; we can just keep grabbing until we reach the other grabby aliens, at which point we might try to set up some peaceful law to keep the peace between us and them. We don't have to fight wars with other grabby aliens per se, but with all this empty stuff between here and there, it seems like either you grab it or somebody else does.

I'm wondering if we may have blown past some listeners here, who heard us talking about alien civilizations coming to grab Earth and are thinking: what are you guys talking about, Robin and David? We don't see these alien civilizations anywhere when we look up at the stars. But that is what your grabby-aliens paper is all about. I think the synopsis of the grabby-aliens paper packs this punch: "If loud aliens explain human earliness, quiet aliens are also rare." Robin, can you explain what your grabby-aliens idea actually is, why there might be alien civilizations out there that are expansionary and coming our way, and why we might want to be a civilization that rises up and expands in our own sphere of influence in order to meet them?

So, we'll go through this briefly and quickly. It turns out there's a Kurzgesagt video that came out yesterday, which now has 2.6 million views, explaining some of the basics of grabby aliens, in case people want to see that.

Kurzgesagt, is that the channel with the cute animations that explain very technical things in very nice ways? Congratulations on that, by the way.

So the key idea is that we wonder about the distribution of aliens in space-time. One possible theory you might have is that we're the only ones at all, that in the entire space-time we can see there'll never be anybody but us, in which case the universe would just have waited for us to show up whenever we were ready. We can reject that interpretation of the universe because we are crazy early. Our best model of when advanced life should appear says we should be most likely to appear on a longer-lived planet toward the end of its history, and our planet is actually very short-lived: it will last another billion years, for roughly five billion years total of history, while the average planet lasts five trillion years. And because life has to go through a number of hard steps to get to where we are, there's a power law in when advanced life appears as a function of time, the power being the number of steps. So say the steps are six: the chance that we would appear toward the end of a longer-lived planet, rather than now on this planet, is basically that factor of a thousand in lifetime raised to the power of six, i.e., 10^18 times more likely to have appeared later in the universe.
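A quick sketch of that arithmetic, using only the numbers just stated (six hard steps, roughly five billion years of habitability for Earth versus roughly five trillion years for the average planet):

```python
# The hard-steps arithmetic from above: the chance of advanced life
# appearing by time t scales roughly as t**n for n hard steps.
n_steps = 6
earth_lifetime = 5e9          # years of habitability, roughly, as stated
avg_planet_lifetime = 5e12    # years for the average planet, as stated

ratio = (avg_planet_lifetime / earth_lifetime) ** n_steps
print(f"Relative chance of appearing late on a long-lived planet: {ratio:.0e}")
# -> 1e+18: a factor of a thousand raised to the sixth power
```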
So we're crazy early relative to that standard, and the best explanation is that there's a deadline soon: the universe is right now filling up with aliens taking over everything they can. In, say, a billion years or so, it'll all be full and all taken, at which point you couldn't show up later and be an advanced civilization; everything would be used for other stuff. That's why you need to believe they're out there right now. And once you believe they're out there right now, you wonder: well, how close are they? What's going on out there? For that we have a three-parameter model, where each parameter is fit to a key data point we have, and the model basically gives you the distribution of aliens in space-time. If you like, we can walk through what those parameters are and what data point we have for each one, but the end story is: these grabby alien civilizations typically expand at a very fast speed, a substantial fraction of the speed of light; they appear roughly once per million galaxies; and if we head out to meet them, expanding near the speed of light, we'll meet them in roughly a billion years. So they are quite rare, that rare, but not so rare as to leave the universe empty: at once per million galaxies, with many trillions of galaxies, there are millions of them out there, and right now the universe is roughly half full of them. That seems strange, because the universe looks empty, but you have to realize there's a selection effect: everywhere you can see is a place where, if there had been aliens there, they would be here now instead of us. The reason things look empty is that you can't see a place where they are; they would move so fast from where they are to here that here would already be taken. The fact that we are not now taken says that no one could have gotten here.

I think I see. So if you were able to look out into the stars and see the aliens, that would be nonsensical, because if that were possible they would have already grabbed you by that time, right? Because they move so fast.

There's only a relatively small volume of the universe where you could see them and they haven't quite gotten here yet; from most of the places you could see, they would be here already, grabbing you. And grabbing doesn't necessarily mean destroying you; it may just mean expanding to your border, such that you can't expand into their territory. It would be enveloping you, and then changing how the world around you looks. So we can be pretty sure we have not now been enveloped by a grabby alien civilization, because we look around us and see pretty much native stars and planets that have not been radically changed. Yes, in the future we might be enveloped, and other things out there might be enveloped, but we're not now; we couldn't see the situation we're in if those alien civilizations had come here.

See, this is actually why this intersects with the Eliezer AI problem, given the way you said that the civilizations out there would have come and enveloped us and changed the environment around us, hopefully leaving us at peace. This is the AI alignment problem in another form.
Another rogue alien civilization is also a kind of paperclip maximizer: they're out gathering all the resources, doing the things they do according to their values. Hopefully their values are such that when they expand into our civilization they leave us alone, because some alignment is still there. But it's the same fundamental structure: there are these goals and alignments with the universe around them, and these aliens expand and change the atoms of the matter they expand into. And because we haven't seen that yet, which is the assumption we have, you are able, in your aliens paper, to actually place us in the arc of history, from the assumption that grabby aliens are grabby and that they will attempt to grab.

One thing to add to that: they might be artificial intelligences as well, wouldn't they, Robin?

Sure, almost surely they are. Within a thousand years I expect our descendants to be almost entirely artificial, and certainly within a million years, and these things would be billions of years older than us. So yes, our artificial descendants will meet their artificial descendants in maybe a billion years, and neither will have stayed anything like us.

Now, I can give you a little more optimism. If these grabby civilizations appear once per million galaxies, and the ratio of quiet to loud ones is even as high as a thousand to one, that would mean that in this expansion they've been doing, and will do, they'll only ever meet a thousand of these quiet ones as they expand through a million galaxies. So these rare places where an alien civilization appeared would be pretty special, and worth saving and isolating. A grabby alien civilization should be really obsessed with what will happen when it meets the other grabby civilizations. They'll really want to know: what are these aliens like? Because they will have this contact at the border, and they will wonder: are we going to be outmatched somehow? Will they trick us somehow? What's going to happen when we meet at the border? So every grabby civilization will be really eager for any data it can get about what aliens are like, and this small number of quiet civilizations they come across will be key data points; they will really treasure those data points for what they reveal about what other aliens could be like. That would be a reason why, if aliens came and enveloped us, they would mainly want to save us, as data about the other aliens; they might make a national park out of us. Now, that doesn't mean they don't freeze-dry us all and run experiments, et cetera; they're not necessarily going to let us live our lives the way we want, but they wouldn't just erase us all either.

Well, Bankless nation, I elect Robin Hanson to make the case to the aliens for not freeze-drying us and for preserving us, if they come at some point. But it's not necessarily, in your terms, that they're coming; it's more about the rate of spread. One interesting aspect of the model: would it be accurate to say, Robin, that the model predicts alien civilizations spread like cancer? And I mean that mathematically, without the negative connotation it brings.

Well, alien civilizations are even created like cancer. In your body you have an enormous number of cells, and in order for one of your cells to become cancerous, it needs to undergo roughly six mutations, in that one cell, during your lifetime. That's basically the same sort of hard-steps process that planets go through: in order to achieve an advanced civilization, each planet also needs to go through roughly six "mutations." That is, one unusual thing has to happen, and then the next unusual thing, and the next, until all six have happened, and then you get something like us. The key idea is that there are, say, a million galaxies, each with millions of planets, and all of these planets are trying to go down this path of having all these mutations, but almost none succeed by their deadline, the point when life is no longer possible on that planet. It's a very rare planet, like ours, for which all six mutations happen by that deadline. And that's how cancer is in your body: 40% of people have cancer by the time they die, which means one of their cells went through all six of these mutations, but that was really unlikely; the vast majority of cells had only one or zero mutations. So life on a planet reaching an advanced level at which it could expand into the universe is mathematically exactly like cancer. And it follows the same parallel with time: the probability that you get cancer, as a function of your life, is roughly time to the power of six, because it takes roughly six mutations; that's why you usually get cancer near the very end. And the chance that a planet will achieve advanced life is likewise roughly time to the power of six. That's why civilizations are appearing in the universe faster and faster over time, according to roughly a power of six, because of this exact parallel. The very early universe had almost nothing, and then recently we showed up, but around us they're all popping up, and the rate at which they're appearing now is much faster than it was in the past.
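A small illustration of that power law, taking the two numbers mentioned here at face value (roughly six sequential mutations, and about 40% cumulative incidence by end of life) and treating the power law as exact for illustration; this is a sketch of the scaling only, not a medical model:

```python
# If P(by time t) ~ c * t**6, and P is ~40% at the end of a full
# lifespan, the same curve predicts far lower odds earlier on, which
# is why cancer clusters near the end of life (and why advanced life
# clusters near a planet's habitability deadline).
p_full_lifespan = 0.40  # stated cumulative incidence, assumed exact here

for frac in (0.25, 0.5, 0.75, 1.0):
    p = p_full_lifespan * frac ** 6
    print(f"at {frac:.0%} of a lifespan: P ~ {p:.4f}")
# Halving the available time divides the odds by 2**6 = 64.
```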
And it shouldn't be lost on listeners that cancer is grabby; cancer falls in the grabby category. There's a bunch of quiet cells just minding their own business, doing their jobs in harmony with their neighbor cells, and then one cell goes rogue and decides: I'm going to grab everything I can around me and grow to the best of my ability. So it's just interesting to see that no matter what scale or medium we look at, whether it's the biological cell, the human species as a whole, or this theoretical superintelligent AI, the same structures continue to show up. Robin, thank you for helping us navigate all these different planes of existence and reason about them all at once.

Well, we did a brief survey, but right, there are a number of different rabbit holes we did not go down.

How about this, because we've got a crypto audience: Robin, do you have any hot takes on crypto? What do you think of this stuff? Let's check that box.

I don't have a new take on crypto; my old take has always been that for any new technology you need both some fundamental algorithms and fundamental tech, and then you need actual customers, and people who pay attention to those customers and their particular needs, because you have to adapt the general technology to particular customer needs.
Crypto unfortunately moved itself into a regime where most of the credit and celebration and money went to having a white paper and an algorithm, and not so much to actually connecting to customers. So there's this huge appetite for tools and platforms, under the theory that if we make a tool and platform, other people will do the work of connecting it to customers, and there are not so many people stepping into that next role. But their succeeding at that task is the main thing that will make the rest of crypto succeed or not. There are plenty of tools and platforms, and not so many people trying to market concrete products to particular customers, holding their hands, working with them when it doesn't work, changing it somehow, iterating in order to make a product actually work for concrete customers. That's how pretty much all business innovation needs to happen, in crypto as everywhere else; crypto isn't different in this regard. It's just that crypto fell into this world where you got all the recognition and attention and money by writing white papers and implementing the first version of the algorithm, and then you moved over to another company to write another white paper and algorithm, instead of staying with the algorithm and trying to get customers to use it. So I wish crypto well; there are lots of interesting possibilities there. But that, in my mind, is the major problem with crypto: the neglect of actual customers, and of the messy details of making customers happy.

Everyone in the crypto industry recently, at least in the VC landscape, is talking about how everyone is trying to sell picks and shovels and no one's bothering to actually sift for gold. So maybe we need some more gold diggers out there.

I think so.

And this is Robin being a utilitarian: show me the utility. We certainly understand that take on crypto, and we certainly have some work to do in that area. But I think we should have you on again sometime, Robin; there is so much more we could pick your brain about. I know you're a huge advocate for prediction markets as a way to solve things, and that's part of a whole class of creative-institution ideas; crypto people are pretty open to creative institutions.

Oh, we are. So you've got to come back and talk to us about institutions, and some of the new creative institutions, because I don't know if you've noticed, Robin, but around us it seems like a lot of our institutions are crumbling, falling to pieces, losing trust, or, in the worst case, just locking down and not allowing much innovation or change.

Exactly: decaying slowly, still not innovating and growing.

All right, well, let's leave this as a to-be-continued, Bankless nation. If you haven't had enough of Robin Hanson, and I certainly haven't, I could talk to this man for hours, then let us know and we'll see if we can get him back on another time. But you have helped me understand a bit more about artificial intelligence, and for that I'm grateful. Are you going to sleep tonight?

I'm going to sleep much better tonight, honestly, yes. Thank you. There are some things in there I think I need to re-listen to and think about a little more: these descendants being so much unlike me, that might make me concerned. But I'm far less concerned than after the Eliezer episode, so I appreciate that.

It's a natural concern.
That's right. Action items for you, Bankless nation: we'll include a link to the Eliezer Yudkowsky episode, "We're All Gonna Die." That was seriously the title, Robin. We'll also include a link to the AI-foom debate, a term we talked about and that I just learned. There's The Age of Em, a book from Robin Hanson that I'm adding to my queue. This is about artificial minds, is it not, Robin?

Artificial implementations of ordinary human minds.

There you go; that sounds fascinating. And of course grabby aliens: there is the Kurzgesagt video as well as the original website, grabbyaliens.com. We'll include all of that in the show notes. Risks and disclaimers: gotta let you know none of this has been financial advice; it's not even space-faring-civilization advice. You could definitely lose what you put in, but we are headed west. This is the frontier. It's not for everyone, but we're glad you're with us on the Bankless journey. Thanks a lot.

Thank you.
Info
Channel: Bankless
Views: 44,426
Keywords: bitcoin, ethereum, defi, bankless, crypto, decentralized finance, crypto finance, open finance, ether, eth, btc, token, tokens
Id: 28Y0v5epLE4
Length: 105min 12sec (6312 seconds)
Published: Mon Apr 17 2023