Is the Universe a Code? with Nick Bostrom

Captions
You have fallen into Event Horizon with John Michael Godier.

In today's episode, John is joined by Nick Bostrom. Nick Bostrom is a philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. He received a B.A. in philosophy, mathematics, mathematical logic, and artificial intelligence from the University of Gothenburg in 1994. He earned an M.A. in philosophy and physics from Stockholm University and an M.Sc. in computational neuroscience from King's College London. During his time at Stockholm University he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine. In 2000 he was awarded a PhD in philosophy from the London School of Economics; his thesis was titled "Observational Selection Effects and Probability." Also in 2000 he held a teaching position at Yale University, and he later became a British Academy postdoctoral fellow at the University of Oxford in 2002.

All of us at Event Horizon make the show for a few reasons: we love to do it, we wanted to listen to a show like this, and another reason is that we want to give other people something they can relax to and learn from. That's why, when people tell us they fall asleep listening to the show, most of the time on multiple playthroughs, it doesn't bother us at all; it's actually extremely gratifying. Knowing that, we're quite happy that today's video is sponsored by Endel. If you need help falling asleep or just want to relax, you can use Endel, an environment-based app that takes everything we know about sound and combines it with truly cutting-edge technology. This all results in providing you with personalized soundscapes that help you focus, relax, and get a satisfying night's sleep. Why are we, a science-based show, recommending Endel? Because we know it's informed by science, created with science, and backed by science. So, for a great way to support Event Horizon, relax, and get help sleeping, be one of the first 100 people to download Endel at endel.io/eventhorizon and you'll get a free week of audio experiences. Thank you to Endel for sponsoring today's video. Remember to subscribe to Event Horizon so you never miss an episode.

John: Nick Bostrom, welcome to the program.

Nick: Pleasure.

John: Now, Nick, you have given me over the years many existential crises, which I absolutely love; I love nothing more than a good existential crisis. But also the Fermi paradox: I spend no small amount of time thinking about the Fermi paradox and why we just see the great silence when we look out into the universe. So my question for you is this. With technological development, it's clear at this point, I think, that we could create some very, very dangerous technologies: molecular nanotechnology, artificial intelligence, superintelligence, things like that, which could pose an eventual threat not only to us but, if it escapes, to anyone in the galaxy, as this enormous superintelligence expands, should it choose to do so. Does this deepen the Fermi paradox? Because it would seem to me that civilizations would put up warning posts, sending out messages that hopefully somebody could decipher and say, "We should not build this technology." But we don't see that; all we see is silence. So is it possible, in your mind, that we really are alone in this galaxy, and that we need to watch what we do technologically in order to not go extinct?

Nick: Yeah, it's possible. I mean, if we received a warning message like that, do you think we would heed it?

John: I would say very likely not, because of geopolitical situations. Everybody loves a good weapon, and these things tend to get developed. If you look at the development of the nuclear bomb, there were really no skids put on it; it just happened. So that's what worries me most: that we will develop technologies like this regardless of the dangers.

Nick: Right. And so I think there are quite strong competitive reasons for humans racing forward, and similarly, if there are alien civilizations, for them to develop technology that would allow them to colonize space; and since we haven't seen any, I think that's evidence for there not being any. So to me the Fermi paradox has never really been that paradoxical, in that we have at least one good explanation, and there might be additional explanations, but one would just be that it's very unlikely for any given planet to develop life at all, let alone technologically advanced life. And it seems that everything we know is perfectly compatible with this hypothesis. If the universe is large enough, if it is in fact infinite, as it appears to be, then there would be infinitely many planets, and even if the probability of any one of them producing intelligent life were astronomically small, with an infinite number of rolls of the die you would still get some coming up on this unlikely value where intelligent life develops. So even if we assume a great filter that makes it unlikely that any planet results in life, we would still expect there to be a lot of life, just very widely separated, and that seems to match what we observe. Now, it could also be that there are additional possibilities that would explain the Fermi paradox, that we're in some sort of zoo, for example; more and more complex explanations could be devised, but there is at least one. And it seems to me a paradox is where there are two things that are in conflict with one another and it's hard to see how you could reconcile them, but in this case I don't see what exactly is opposing what in a theoretical, explanatory sense.
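[The infinite-rolls-of-the-die point can be made exact. If each of $n$ planets independently has some probability $p > 0$ of producing intelligent life, then the chance that life arises nowhere is

\[ (1-p)^n \longrightarrow 0 \quad \text{as } n \to \infty, \]

so in an infinite universe life appears somewhere with probability one, however small $p$ is; rarity then shows up as wide separation between inhabited worlds rather than as total absence.]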
John: In the case of an infinite universe, and we've measured essentially that the geometry of the universe is flat, so it looks like it very well could be infinite beyond the observable universe bubble that we have: how does an infinite universe affect the concept of it all being a simulation? It would seem to me that to run an infinite universe you would need infinite resources, right?

Nick: Yeah, if you actually wanted to implement an infinite universe with full local granularity. But to generate a universe that appears, from some vantage point, as if it's infinite, you wouldn't need infinite resources. Speaking crudely, imagine a canopy around us with little needle pricks through it that you let light shine through to depict the stars. A more sophisticated version of that kind of thing would create the illusion, for the people inside the theater, that they were actually living in an infinite universe, and it would take only a finite amount of resources to run. And we can get some sense of the possibility that very realistic appearances could be generated with a small amount of resources if we just reflect on our own brains' achievement every night when they produce dreams that in some cases appear quite realistic and convincing. If that can be done by our own humble little brains, then imagine what could be done by a technologically mature civilization running planetary-sized supercomputers.

John: So all the things that we see certainly could be generated by such a device, and essentially it would render what you are looking at: you would just see a superficial sort of universe looking out there, but within your own sphere, Earth essentially, the simulation would be complex, and the further out you look, the less complex it would be; which actually is what we see, because we lose resolution the further we look.

Nick: That's right. I mean, you could also make it seem, I guess, such that it would appear complex at greater distances, as long as we weren't actually experiencing all of that complexity. If we just received some low-bandwidth signals suggesting there were immense supercivilizations in our neighboring galaxies, that would make it appear as if there were maybe more complexity a little further out, but our observations could still be rendered relatively cheaply, in that the main cost would be simulating our own brains, and then, at some crude level, the world around us, and then this low-bandwidth signal from the neighboring galaxy.

John: Now, biases, and you've written on this: anthropic reasoning. We always color our thinking, whether consciously or unconsciously, with the fact that we are human, and we are humans with human brains thinking about these subjects. So how does one go about removing anthropic reasoning from thinking about things like technology or aliens?

Nick: Yeah, so anthropic bias is a very particular kind of bias, arising from the fact that all our observations are filtered by the precondition that there be some observer in a suitable time and place to have the observation. In most everyday contexts this doesn't really arise, but there are particular contexts in which it becomes important, such as when we are thinking about the Fermi paradox, as we were just doing. You might naively think that we have observed one planet in detail, our own, and here intelligent life evolved, and so that would be evidence that the emergence of intelligent life is easy, that it tends to happen on planets as long as the macroscopic variables are roughly in the right range. But that, I think, would be to fall foul of an anthropic bias, in that there is this obvious observation selection effect that guarantees that even if intelligent life is extremely rare, it would still be the case that every observer would observe themselves in a place where this possibly very unlikely thing happened. All observers will observe themselves having evolved on some planet, no matter whether that is very likely to occur on any one planet or very unlikely to occur on any planet. It looks as if the observational predictions of both of those hypotheses are the same and match what we observe, and so our observations don't actually distinguish the two, at least not the mere observation that we came into existence on this planet.

John: Now, in regard to an even bigger issue, the anthropic principle, or the fine-tuning problem; the two are intimately related. This universe appears, from what we can see, to be fine-tuned for the development of not only life but matter itself; if you change a few parameters, matter can't exist. Yet this universe is all that is, barring a multiverse, which we can't test. Would that seem to suggest that it is a simulation, or is that a step too far?

Nick: I think it's a step too far. It certainly is prima facie puzzling that the constants of the universe are such that if they were very, very slightly different, then we would have had a lifeless, maybe matterless, universe; that things are, as it were, balanced on a knife's edge. It is a fact that cries out for some sort of explanation, and different explanations have been proposed. Of course, theists have the explanation that our universe was the result of intelligent design: you could imagine that if our universe were created by an intelligent being who particularly wanted life to exist, then that being would adjust the constants to permit life to exist. So that would be one type of explanation. Another is the multiverse hypothesis, where our universe is fine-tuned but the whole ensemble of universes that exist need not be fine-tuned. You could have a very simple mechanism that just spews out all kinds of different universes, and that could be a very simple hypothesis, because it doesn't need to build in some very complex and precise criteria for what it is producing. And then, given that multiverse instantiating a wide range of different parameter settings, you invoke an observation selection effect to explain the fact that we find ourselves in a universe that appears fine-tuned: namely, only those universes would contain observers, and so all observers, given such a multiverse hypothesis, would see the apparently fine-tuned kind of universe that we see. So those would be two possibilities, and a third might be some kind of simulation hypothesis. Then, of course, you still have the question of the basement universe, the universe in which the computer running the simulation is built. What about that universe: is it fine-tuned? And then you might ultimately again have to resort to a multiverse explanation or an intelligent design explanation to account for the fact that that universe is fine-tuned. A further alternative would be, and we don't know whether this is possible, to actually come up with a very simple fundamental theory that with just a few axioms implied the precise parameter values that we see, without using an ensemble of universes as part of the explanation; some sort of super-duper symmetry that just makes it pop out that the gravitational constant had to be just so and the other constants had to have the values they do. That would be kind of surprising, but at least if we map out the space of logical possibilities for what an explanation could look like, then that should also be included.

John: Now, the idea of a simulation: you have made the case that three criteria could be set forth to define, and at least ask, the question "do we live in a simulation," based on the concept of an ancestor simulation. Do you think that, given our technological development and our mindset as we are, we would ever create our own ancestor simulation, and could we?

Nick: Well, I would say that if we do, then that would dramatically increase the probability that the simulation hypothesis is true; that is, if we get to the point where we are creating our own ancestor simulations, we should conclude that we are almost certainly living in a simulation ourselves. And it looks like it is possible to do this, not for us today, but we have a path that we could pursue, a technological trajectory that will eventually lead to the development of capacities that would enable us, using a tiny fraction of our resources, to create vast numbers of detailed computer simulations, including simulations of simulated brains that would be conscious of their simulated worlds, such that the experience of these simulated minds would be similar to the experience that we have. I think that's a physically possible technology that technologically mature civilizations will develop.

John: One question I have for you, somewhat off the wall: say it's a fantasy simulation. In other words, this simulation that we might live in is someone else's fantasy, their ideal virtual reality, just as we might seek to create ideal virtual realities for ourselves, in other words video games, so to speak, on a next-level basis, where we descend into virtual reality and sort of forsake life in the universe. Is it possible, perhaps, that the reason for creating a simulation is virtual reality, and we simply live in somebody's video game?

Nick: Possible, yeah. I mean, sure, a lot of things are possible in this scenario. I guess you could distinguish the type of simulation that would just be for the simulator to observe from the outside, like SimCity or one of these WorldBox simulations, versus a computer game where the player has an avatar inside the world and is actively shaping it and participating in it. Both of those are conceptually possible.

John: Now, what about creating an ancestor simulation for scientific reasons? Say, for example, you are some entity, a Boltzmann brain in a dead universe, and you're thinking: what was this universe like? I'm bored, everything's black, I've just popped into existence, and I'm going to simulate what this was. I guess what I'm asking is: why would you create an ancestor simulation? Would it simply be academic; in other words, you want to try to reconstruct what once was in a universe? Say, in the far future, when the red dwarfs and everything are blinking out, any civilization that's there might wonder what this was like, because it would be extraordinarily hard to study cosmology when you can't see everything in a hugely expanded universe. Would that be a motivation for creating an ancestor simulation?

Nick: I think if you're a Boltzmann brain, you're in a bit of a pickle, in that you probably won't last that long. Most Boltzmann brains would be thermal fluctuations that just happen, by pure chance, to take the shape of some sentient brain, but a free-floating brain in the middle of some intergalactic gas cloud wouldn't survive very long. In fact, almost all Boltzmann brains would have observations that would be to some high degree chaotic. I think for every mind that has the relatively coherent shape that ours do, as a result of having evolved and having to be functionally adapted to survive in the world around us, for every such Boltzmann brain with those kinds of experiences, there would be trillions and trillions that were just more haphazard, since Boltzmann brains wouldn't have been filtered by any kind of functional-adaptiveness criterion. And so if we were Boltzmann brains, I think we should expect to see something more chaotic, something like the smallest possible experience: if you started with a single atom and then kept adding atoms at random, what's the smallest possible cluster that would produce at least some sort of sentient experience? Maybe it would be something analogous to watching static on the television, but rather than a big high-resolution screen, just a few pixels in some sort of blurry, diffuse consciousness. I think you would get astronomically many more of those than you would get highly organized, anthropomorphic-style minds like the ones we possess.
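[A rough sketch of the statistics behind this counting argument, on the standard estimate from equilibrium statistical mechanics: the probability of a thermal fluctuation assembling a structure with free-energy cost $\Delta F$ at temperature $T$ is suppressed by a Boltzmann factor,

\[ P \sim e^{-\Delta F / k_B T}, \]

so each additional increment of size and order multiplies the improbability exponentially. A large, coherent, human-style brain is therefore exponentially rarer among fluctuations than the smallest configuration capable of any sentient experience at all, which is the sense in which minimal, chaotic observers would astronomically outnumber minds like ours.]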
John: One wonders what effect time would have on that, because if time is inherent to this universe, space-time, then for the Boltzmann brain its existence may be a blink of an eye, and it just packs a whole bunch of simulation into that very brief existence, while to us it seems like trillions of years.

Nick: Well, I'm not sure why subjective time would flow faster for a Boltzmann brain if it were made of the same stuff as we are made of, if it had biological neurons and things like that. Now, if it spontaneously materialized as some kind of super-dense neutron-star pattern, then yeah, sure, maybe it would do more in one minute than we would do in a minute, although by that token it would also be more likely that the minimally viable version of it would last a lot less than a minute. So again, I would expect the majority of experiences to be of the minimal, very short-lived, very chaotic kind that I described earlier.

John: Now, simulation theory itself: do you think, in your view, that it's testable? Could we ever determine whether this is an ancestor simulation, or is it going to be something we forever wonder about because we can't eliminate the possibility?

Nick: Yeah, I think there are different observations that would provide evidence for or against the simulation hypothesis. At one extreme, we have, say, the possible observation of a window popping up in front of you with text explaining that you're in a simulation and offering you further information that you could click on. That would be pretty convincing for the simulation hypothesis. Another type of evidence that would increase the probability of the simulation hypothesis is if we ourselves continue down the path of developing the technology needed to build them. The closer we get to technological maturity, and the more we retain our present interest in creating these types of simulations, the more likely it is that we will create our own ancestor simulations, and that would then eliminate the two alternatives to the simulation hypothesis. If your listeners have read the original simulation argument paper, you'll remember there are these three alternatives, and the argument tries to show that at least one of them is true. One is that almost all civilizations at our current stage of development go extinct before reaching technological maturity; that hypothesis would go out the window if we ourselves reached technological maturity, since that would suggest it wasn't so hard. The second alternative is that, out of all the civilizations that do reach technological maturity, almost none remain interested in using their planetary-sized supercomputers to actually create large numbers of ancestor simulations. But again, if we reach the point where we were doing this kind of thing, that would also reject the second alternative, and that would, by the simulation argument, leave only the third: the simulation hypothesis itself. And you can check the simulation argument for the reasoning behind this, some simple probability theory.
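[For readers who want the probability theory spelled out, the original paper, "Are You Living in a Computer Simulation?" (2003), does it in one formula. Let $f_P$ be the fraction of human-level civilizations that reach technological maturity, $f_I$ the fraction of those still interested in running ancestor simulations, and $\bar{N}$ the average number of ancestor simulations run by an interested civilization. The fraction of all observers with human-type experiences who live in simulations is then

\[ f_{\mathrm{sim}} = \frac{f_P \, f_I \, \bar{N}}{f_P \, f_I \, \bar{N} + 1}. \]

Because a technologically mature civilization could make $\bar{N}$ astronomically large, at least one of the following must hold: $f_P \approx 0$ (the first alternative), $f_I \approx 0$ (the second), or $f_{\mathrm{sim}} \approx 1$ (the simulation hypothesis).]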
Nick: But what that means is that anything that makes it more likely that we will reach the point of creating our own ancestor simulations would also be empirical evidence in favor of the simulation hypothesis. And of course, conversely, the absence of windows popping up in front of us announcing that we're in a simulation, or observations of things that make it less likely that we will make it through, all of that is evidence against the simulation hypothesis. For example, suppose we discovered some extremely dangerous technology that it looked like every civilization would discover at some point before developing the capability of creating ancestor simulations, some technology on the way there that would inevitably destroy whoever discovered it. If we found evidence of that, it would increase the probability of the first alternative, and hence would remove the reasons we have for believing the simulation hypothesis and reduce the probability of that hypothesis.

John: Now, another idea that you have put into the debate is the idea of a Singleton. Broadly speaking, a Singleton, for all its pluses and minuses, could be absolutely amazing, or it could be an absolute nightmare, an immortal dictator or an immortal artificially intelligent dictator. But one thing it could do is impose ethics and say: no ancestor simulations, because it's unethical; you're creating beings, so to speak, in an artificial environment, and you just can't do that. Might that be the answer to one of the three: that all civilizations simply never create ancestor simulations because they feel it's unethical, and something that makes a singular decision, a Singleton, stops it?

Nick: Yeah, that's the second alternative, this strong convergence towards refraining from creating ancestor simulations. It could perhaps be implemented if there were a very strong convergence towards developing Singletons, so that each technological civilization became a Singleton, and then additionally you would have to postulate that all these different Singletons scattered through the universe arrived at the same conclusion as to what they should do with respect to creating ancestor simulations, i.e., refrain from it. But that certainly is one way in which the simulation argument could be true.

John: Do you think that could be a case of consensus by pure logic? In other words, you reach a Singleton, and the answer to the question is always the same: they go through the logic of it and decide, "We don't do this," and the next civilization over that has developed a Singleton also arrives at the same conclusion; that there are absolute truths, absolute conclusions, possible in the universe that Singletons will arrive at unanimously?

Nick: Yeah, that seems possible. It might appear incredibly unlikely, but we need to remember here that the Singletons that would have the ability to do this would presumably be superintelligent: if they could develop the technology to create huge numbers of ancestor simulations, they would also have the ability to increase their own intelligence or develop machine superintelligence. So the likelihood that there could be some such very strong and striking convergence is increased, I think, by the fact that these would all be superintelligent minds, and they might fairly reliably tune into the same considerations. If there is some consideration that actually decisively disfavors creating ancestor simulations, they might discover it and fully realize its ramifications.

John: So, in the development of a Singleton, whether it's a human Singleton where everybody's on the same page, so to speak, and everybody's augmented, with brain augmentation and everything else, and everybody starts arriving at the same conclusions: do you think that leads to a universe essentially of singular thinking, and could that solve the Fermi paradox? That everybody just concludes: no, nope, don't go too crazy with technology, don't develop AI, don't go out into space and create galaxy-spanning empires, no Dyson spheres, things like that; instead, focus on maintaining your resources for as long as you can and just hang on. And that's the simple solution of the Fermi paradox: eventually everybody thinks the same.

Nick: I think the possibility of a scenario where everybody thinks the same about some of these fundamental questions is increased if we postulate superintelligence, because one source of variation is error: people make different errors, and so they come to all kinds of different beliefs and opinions about things. With superintelligence that would be less likely, so that increases the chance that there is a convergence. I don't think it's sufficient, though, to have a convergence in empirical beliefs about how the world works; there would really have to be some convergence of motivation, so that they would all be motivated to stop short of building superintelligences or colonizing space. And again, it's conceivable that there could be a fairly strong such convergence if there are instrumental reasons that were discovered for refraining from these things. I would say, though, that with space colonization it's perhaps a little bit harder to see why exactly they would convergently arrive at a decision to refrain. With simulations, there are ethical considerations and maybe game-theoretic reasons why this could be convergently discouraged, but it seems just such a waste to have all these cosmological resources out there going unused. I mean, we talk about conserving energy, right, that you should switch off the lights when you go out, and all of this; but look out there in the universe: there are suns illuminating empty rooms, vast amounts of negentropy just being flushed down the toilet on a massive scale. At some point, some civilizations would want to go out and make better use of this. And I don't know, maybe there is some clever reason we haven't thought of why this would be a bad idea from the perspective of all the different values and goals these civilizations might have, but it's a little bit harder to see that, I think, than to imagine that there could be some such reason for refraining from creating ancestor simulations.

John: The elephant in the room is yet another option on the list, and that is that everybody goes extinct before getting to the stage where they can create an ancestor simulation. Do you think this could be the downside of a Singleton, meaning that the extinction of humanity occurs as a result of the Singleton, and that's it, and nobody ever gets to the level you need? Or this could go for any technological trap that sits in the way, where there's some stage in technological development that a civilization hits that causes its own extinction, so that it can never get close to the power of being able to create an ancestor simulation; in other words, you die before you get to that point, and that's the rule.

Nick: Yeah, so that's alternative one. It would have to be a threat of a particular kind, though. There are some existential risks that might be pretty severe but are unlikely to affect everybody uniformly throughout the universe. So maybe, for example, in a pessimistic frame of mind, you could persuade yourself that humanity is likely to destroy itself because we will develop some very powerful weapon and then, the world being what it is, we will fight some big war with it, and that will be the end of it.
And maybe that's a fairly likely way for us to come to an end. It's unclear, however, whether that would be a plausible way for the first alternative to be true, because that requires a close-to-universal failure of all civilizations at our stage, throughout the universe, to reach technological maturity, and you would think that with some failure mode like war, there would be at least a few civilizations here and there that avoided that particular failure. Maybe they would already have had one conqueror who conquered the whole world, so they wouldn't be fighting any wars anymore; or maybe they would have evolved to be more peaceful; or they would have created a sufficiently strong United Nations-like structure to abolish war. If there are millions of these civilizations, you would expect at least a few to have ended up in a better situation with respect to international conflicts. So that would be an example of a risk that might be pretty big in terms of its probability of destroying us, but still pretty unlikely to be the explanation for why nobody reaches technological maturity throughout the universe.

John: Now, within philosophy you think about existential crises, essentially, and put up warning signs about what we might look for as we develop technology, put brakes on it, and be careful what we do, especially with artificial intelligence. Do you think we are at the point where we need to make those decisions now, and that if we don't, we are in grave danger? Or do you think we have time to change the mindset and start thinking through our technological development, or is it too late?

Nick: Well, I mean, it's not either/or. We are, I guess, doing a little bit of thinking about our technological development, though as a fraction of global resources, I've got to say, it's a pretty small fraction that is devoted to trying to ensure our own future. We are pretty much living for the moment, by the seat of our pants, as they say. But on the positive side, there's a lot more quality-adjusted thinking these days, I would say, than there was even 20 years ago. In this whole field of study, I created the Future of Humanity Institute; we were founded in 2005.
I'd been thinking about these things from before, but I remember that in the late 90s, what really existed in terms of infrastructure for trying to think through the implications of future technological developments, these kinds of radically transformative technologies that pose existential risk, was more or less a couple of internet mailing lists, some people hanging out there and chatting a little bit, and a couple of institutions that were dabbling, but really very limited. Now you have a rapidly growing ecosystem of effective altruists and rationalists; you have various academic institutes in different places; you have specialized research labs focusing on AI safety, within both some big tech companies and also as nonprofits; you have some big foundations funding this work; and there's a lot of talent flowing into the field, AI alignment specifically, and this broader kind of consideration of macrostrategy and x-risks more generally. It's still a very small fraction of global GDP, but the growth rate has been significant. But yes, I think we are still more at the stage where one might be thinking about how to nudge things on the margin, rather than fantasizing about what the ideal setup would be if one somehow could reform our global epistemic and governance structures from the ground up.

John: One concept, which I think actually dates from the 2000s, is the idea of the singularity, Ray Kurzweil's singularity. How do you think that affects this, and what are your thoughts on the idea of a singularity? Do you think that's how it's going to go, or do you think it's going to be a lot more messy and complicated than that?

Nick: The singularity, I think, could be quite messy and complicated if you zoom in. The term means different things, but in one meaning it just means things happening really fast, and just because things happen very fast on a calendar timescale doesn't mean there couldn't be a lot of stuff going on if you zoom in; it might just be that all these complex things unfold more quickly than we are used to. So I do think it's likely that there will be a period of very rapid change coinciding roughly with the development of machine superintelligence. I don't think this is certain or known; it could also unfold over a somewhat longer period of time, but relatively fast takeoff scenarios seem quite likely to me. Now, traditionally there have been other components rolled into this concept of a singularity that I would want to break out and analyze separately. Some people have associated the singularity hypothesis with the claim that there would be some predictability horizon beyond which we cannot see: the future becomes unknowable, it changes so fast, it's remade, and we have no ability to extrapolate or predict what happens beyond that point. I think that's quite a separate claim and would need a lot of qualification. Another component that was rolled in, I think particularly by Kurzweil, is this notion of exponential growth: the idea that you can forecast when the singularity will occur, and how it will unfold, by plotting a bunch of technological fields on log paper and seeing that you have an exponential there, and that that's how things will continue. That may or may not be true in the relevant sense; I am a little bit skeptical. But in any case, it's a completely separate and independent claim from the claim that there would be some period in the future where progress is extremely fast. So I think we need to disentangle these three different senses of singularity, and then we can debate each one on its merits.

John: We are barreling headlong into two technologies over the next few centuries anyway that seem infinitely dangerous, more so than nuclear weapons: one is molecular nanotechnology, and the other is artificial intelligence. One could even toss in human augmentation to make humans superintelligent, because if we do that, we are no longer Homo sapiens; we are something else. In other words, Homo sapiens goes extinct and you get something new, a technological human. So how do you view this? Do you think that these existential risks are what's going to end us, above and beyond things like climate change or the things we talk about right now in the present, nuclear weapons, whatever? Do you think a great filter lies ahead of us, with the technologies that are coming and being developed rather rapidly, especially artificial intelligence?

Nick: Well, if by "great filter" one means something that could possibly explain the Fermi paradox, that is, some radically improbable step that hardly any civilization makes it through, not even one in a million, then probably not. As far as superintelligence is concerned, it seems that even if we failed at that step, what that would result in is unaligned superintelligence, that is, superintelligence that doesn't care about humans or human values. That wouldn't explain the Fermi paradox, right? Because then that superintelligence itself might go out and do the colonizing and render itself visible to other civilizations out there. So even if superintelligence were a big existential risk, it wouldn't be a great filter in that sense. Molecular nanotechnology, I think, is a significant source of existential risk, but it's a little harder to see how it would be an explanation for the Fermi paradox, a filter in that sense; if for no other reason, it looks feasible to get superintelligence maybe before we get atomically precise manufacturing and nanotech, and once you have superintelligence, the superintelligence might see the danger and avert it.

John: Or you achieve superintelligence and five minutes later you have molecular nanotechnology, right?

Nick: But then it would maybe be the superintelligence that governs how that is used, and if the superintelligence formed a Singleton and had enough savvy to foresee the consequences of certain ways of using the nanotech, it could steer clear of that risk, it seems. But still, yes, I think nanotechnology would be a source of some existential risk, and synthetic biology too, I think, should maybe be added to that list of large existential risks. And depending on how you carve the cake up, there might be other ways you would get big slices. I think, for example, a lot of existential risk arises ultimately from the possibility of conflict: the fact that humanity is splintered into different opposing factions, with no overarching, reliable ways of resolving our global coordination problems and producing global public goods. That manifests itself, for example, in militaries: we spend a lot of money developing the capacity to kill the people in the other factions, who then develop their own militaries to kill us. And if you zoom out and look at humanity from the outside, that looks like a kind of tragedy, even if these weapons are never used, and multiply that if and when they are used.
We see it in global warming, a kind of failure to protect the global commons; in overfishing; we see in many different ways the costs of not having reliable methods to resolve our differences, particularly at the international level. So if you carve up the risks into, say, risks by accident, risks by conflict, and risks from nature, independent of human activity, I think the risks from conflict would be a big chunk.

John: Now, you mentioned something interesting, adding synthetic biology to the list. This is an existential threat that I don't think very many people think about or know about: the idea of creating, say, something like an artificial plant that outcompetes all natural plants, so the natural plants go extinct and eventually you end up with a completely artificial ecosystem made up of creations, in other words custom-made plants, because when you can tinker with genetics you can outdo nature and create superior plants, essentially. This is probably one we're already dealing with, don't you think, with GMO-type crops and things like that? For example, corn pollinating all over the place; I think a cornfield can pollinate over a two-mile radius. So we may already be in that existential problem, right?

Nick: Yeah, and then there are the viruses and so forth. I think the field of synthetic biology is advancing so quickly, and it's just a large space where creative scientists will find ways of doing all kinds of stuff, and some small subset of those ways might be ways of doing things that are extremely dangerous, if we are unlucky, existentially dangerous. And right now we don't really have any way to steer around those discoveries. I have, in a different paper, the vulnerable world hypothesis, this metaphor of a great urn containing balls representing possible inventions, different technologies, ideas that we might discover, with human history as the process of reaching into this urn and pulling out one ball after another. So far the balls we have extracted have been white balls, or maybe various shades of gray: they have been mostly positive, and some technologies have been mixed blessings, but we haven't so far pulled the black ball out of the urn, a discovery so dangerous that it destroys the civilization that discovers it. But if there is a black ball in the urn, then it looks like we will eventually pull it out, if science and technology continue on a broad front. And whilst we have become quite good at pulling balls out of the urn, we haven't really any capacity to put them back in: we can invent things, but we can't uninvent things. So our strategy, such as it is, seems to be basically to hope that there is no black ball in the urn.
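[One way to see why a black ball would eventually come out: on the simple assumption that each draw independently carries some fixed probability $q > 0$ of being black, the number of draws $T$ until the black ball appears is geometrically distributed,

\[ P(\text{no black ball in the first } n \text{ draws}) = (1-q)^n \to 0, \qquad \mathbb{E}[T] = \frac{1}{q}, \]

so however small $q$ is, the expected wait is finite: continued drawing makes the black ball a question of when, not whether.]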
Nick: And that's particularly true in synthetic biology, where there is not at all the same safety consciousness and culture as there is, say, in nuclear physics after Hiroshima and Nagasaki. Physicists and regulators realized in their gut that nuclear weapons were a dangerous thing; it was not just fun, creative science, but something that needed tight oversight and controls. The same is not yet the case in biology. There the ethos still puts a lot of emphasis on democratizing access to science, favoring open publication, and celebrating creativity, and there is not that gut-level awareness that some of what is discovered could turn out to be really dangerous. So I think that's an area where we need to level up.

John: So, in other words, as at the beginning of the nuclear age, always look for the fallout before you set off the bomb itself. There was some very reckless activity in the development of nuclear weapons that just would not happen today, but we only know about it through trial and error. In essence, if we pull the black ball out of the urn, we don't really have the luxury of trial and error: it's either going to get us or not, and the debate may not happen unless you have it beforehand and try to identify the black ball, right?

Nick: Right, yeah. I mean, I kind of suspect that the level of safety consciousness before and during the Manhattan Project was actually greater than it is today in synthetic biology. There were people, I'm thinking of Leo Szilard, who was the first person to realize the possibility of a nuclear chain reaction, who tried actively to suppress publication of relevant results, with partial success, and he was later persuaded to reach out to Einstein so that together they could write the letter to President Roosevelt that then led to the instigation of the Manhattan Project, only because they feared that Nazi scientists might otherwise develop the bomb and get there first. So again, it was this competitive dynamic that caused these people to override their concerns. But in some areas of biology there is not even that: it's not as if researchers are driven by a tragic fear of being up against an ultimate evil they must defeat, in the form of a Nazi Germany; it's rather that it's kind of fun to discover things, and sometimes the thinking has not gone much further than that.

John: Now, my last question for you, and I'll get to it in a moment, but first I want to turn everybody's attention towards your home page, nickbostrom.com; there will be a link in the description below. My last question for you, Nick, is this: you think about existential crises, things that could happen to us, bad things, and, as I said earlier, you put up warning posts and tell us that maybe we should think about this before we take a leap. Which one scares you the most?

Nick: Well, that maybe is superintelligence, AI risk, but it's a double-edged sword; well, maybe that's not a good metaphor, since a sword is sort of dangerous on both sides. What I mean is that AI is also a source of great hope at the same time as it's a big fear. I think if things go well with AI, it could be the solution to a lot of other existential risks, and the key that unlocks a much bigger, better future for humanity. So I think this transition to the machine intelligence era will be associated with great existential risk, but ultimately it's a portal that we will need to, and should, pass through, and our focus should not so much be on whether or not we should do it, but more on how we can put ourselves in the best possible position, using the time remaining, to maximize the chances that it goes well.

John: All right, Nick, we are out of time. Thanks for appearing with us; I very much appreciate it, and I look forward to reading more of your ideas, because they are some of my favorites. Like I said, I love a good existential crisis, and you definitely call attention to a great many of them.

Nick: Well, thank you for the conversation.
Info
Channel: Event Horizon
Views: 122,701
Keywords: fermi paradox, simulation theory, singleton, super intelligence, ai, nick bostrom, event horizon, simulation argument
Id: j7Bfau-fyE8
Length: 50min 57sec (3057 seconds)
Published: Thu Sep 08 2022