The end of humanity: Nick Bostrom at TEDxOxford

Video Statistics and Information

Reddit Comments

Not a "TED talk", but a "TEDx talk". Very different animal.

👍︎︎ 21 👤︎︎ u/[deleted] 📅︎︎ Dec 19 2019 🗫︎ replies

This video is a general explanation of what existential risk is.

The speaker uses individual events/discoveries as hypothetical examples of existential threats, but he doesn't go into detail. There is no mention of systematic conditions or events/discoveries that could become existential threats over longer periods of time.

For anyone interested in the topic, I'd recommend browsing Wikipedia over watching this video from 2013.

👍︎︎ 9 👤︎︎ u/Xzerosquables 📅︎︎ Dec 19 2019 🗫︎ replies

Ah, the halcyon days of 2013...

👍︎︎ 7 👤︎︎ u/[deleted] 📅︎︎ Dec 19 2019 🗫︎ replies
Captions
I want to talk to you today about existential risk. The first slide I will show depicts some of the greatest catastrophes of the last century, so the squeamish among you might want to look away. It shows the net effect of the two world wars, the Stalinist purges, the Holocaust, the Rwandan genocide, the Spanish flu. As you can see, in statistical terms they don't even show up: the total number of human lives lived on this planet hasn't really been much affected by even these worst disasters that we have experienced. If one wants to consider some event that would actually show up in a graph like this, we have to go back further, say to the Middle Ages, where something like the Black Death would have made a dent in this kind of population graph. But even that kind of catastrophe is not what I want to talk to you about today. Existential risk is something different.

There is a philosopher here at Oxford who wrote a book back in 1984 called Reasons and Persons, and he had a simple thought experiment that helps bring out what is at stake here. He asked us to consider three different scenarios. One is that nothing happens: there is peace, and things continue as normal. Another is that there is a nuclear war that kills 99% of the world's existing population. And the third scenario is that there is a nuclear war that kills everybody. Now, if we are asked which of these we would prefer, obviously we prefer A, and if we had to choose between B and C we would say that B is really horrible, but C is even worse. So the rank order is pretty clear: A is better than B, which is better than C.

But then Parfit asked us to consider a different question: how big is the difference between these scenarios? If asked how big the difference is in terms of the number of people killed, then it is clear that the difference between C and B is much smaller than the difference between B and A. The difference between A and B is, in today's terms, almost 7 billion people, whereas the difference between B and C is just one hundredth of that, about 70 million people. However, a different question is more relevant to our decision-making: how big is the difference in the badness of these three scenarios? And here the order is reversed. Parfit argues that the difference between how bad C is and how bad B is is far greater than the difference between how bad B is and how bad A is, because if C comes to pass, if 100 percent of everybody dies, then it is not just that a massive number of people are killed; it is also that the entire future is destroyed. If B occurs, by contrast, we might eventually climb back up, and there might be as many people living in the future as would have lived anyway. So this goes to the heart of why I think existential risk is a particularly important and relevant category to consider.
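The arithmetic behind the "7 billion" and "70 million" figures can be laid out in a few lines. This is a minimal sketch, not from the talk; the world-population figure of roughly 7 billion (around 2013) and the variable names are assumptions for illustration:

```python
# Rough arithmetic behind Parfit's three scenarios, using the figures quoted in the talk.
world_population = 7_000_000_000          # assumed: roughly the world population around 2013

deaths_a = 0                              # scenario A: peace, nothing happens
deaths_b = int(0.99 * world_population)   # scenario B: nuclear war kills 99%
deaths_c = world_population               # scenario C: nuclear war kills everyone

print("extra deaths, A -> B:", deaths_b - deaths_a)   # ~6.93 billion
print("extra deaths, B -> C:", deaths_c - deaths_b)   # ~70 million, roughly one hundredth of the above
```

Counted by deaths alone, C looks only marginally worse than B; Parfit's point is that this simple subtraction omits the value of the entire future, which only C destroys.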
Here is another way to bring it out. We can consider different types of catastrophe and draw two axes. On the y-axis is the catastrophe's scope: how many people are affected. It could range from a personal catastrophe, something that affects one person, up to a local, a global, or a trans-generational or pan-generational catastrophe, something that affects not just currently existing people but all generations to come. On the other axis we can plot severity: how bad the effect is on each affected person. In the lower-left corner we might have an imperceptible personal risk, like the loss of one hair; it is a very small harm, and I have suffered a lot of those harms in recent years. Then, as we move up and to the right in the diagram, we get increasingly severe catastrophes, and we can delineate roughly a class of global catastrophic risks, which are ones that are at least global in scope and at least endurable in intensity. But up in the upper-right corner we have the special category of existential risk.

An existential risk is one that would have crushing severity, which means death or something in the ballpark of being as bad as that, something that radically destroys the potential for a good life, like severe permanent brain injury or lifetime imprisonment, and that is pan-generational in scope, that is, affecting all generations to come. So we can define an existential risk as one that threatens the premature extinction of Earth-originating intelligent life, or the permanent and drastic destruction of its potential for desirable future development.

Let's consider the values that are at stake when we discuss these kinds of risk. It is possible that the Earth might, in a good case, remain habitable for at least another billion years. Suppose that one billion people could live sustainably on this planet for that period of time, and that a normal human life is, say, a hundred years. That means that 10 to the power of 16 human lives of normal duration could be lived on this planet if we avoid existential catastrophe. This has the implication that the expected value (that is, when you multiply the value by the probability) of reducing existential risk by a mere one millionth of one percentage point, a reduction so small that it is unnoticeable, is at least a hundred times the value of a million human lives. So if you are thinking about how to actually do some good in the world, there are many things you could do: you could try to cure cancer, or dig wells in Africa. But if you could reduce existential risk by a mere one millionth of one percentage point, then arguably, on this line of reasoning, it is worth more than a hundred times the value of saving a million human lives. This is mind-boggling.

The values can get even bigger. If we really manage to survive for a very long time, maybe we will develop more advanced technologies, maybe our descendants will one day colonize the galaxy and beyond, maybe they will find ways of implementing minds in computers, and so forth. If one runs the calculations using these less conservative assumptions, a much, much larger number of possible lives could result in our future if everything goes well, which would make the expected value of reducing existential risk vastly greater than this estimate.

This suggests that one might simplify action that is motivated by altruistic concern. That is, if you really want to make the world better, insofar as you are acting to do that, you can simplify the decision problem by adopting the maxipok rule: maximize the probability of an OK outcome, where an OK outcome is any that avoids an existential catastrophe. Other good effects, and there are other good effects, like helping people here and now even when it does not affect existential risk, will have an expected value that is trivial compared to even the slightest reduction in existential risk, on this line of argument.
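The "10 to the power of 16 lives" and "a hundred times a million lives" claims can be checked directly from the assumptions stated in the talk. A minimal sketch, assuming only those stated figures (one billion people, one billion years, hundred-year lives):

```python
# Expected value of a tiny reduction in existential risk, using the talk's stated assumptions.
people = 1e9        # one billion people living sustainably on Earth
years = 1e9         # Earth remains habitable for another billion years
lifespan = 100      # a normal human life, in years

potential_lives = people * years / lifespan      # 1e16 lives of normal duration

risk_reduction = 1e-6 * 1e-2                     # one millionth of one percentage point
expected_lives = risk_reduction * potential_lives

print(f"potential future lives: {potential_lives:.0e}")              # 1e+16
print(f"expected lives from the risk reduction: {expected_lives:.0e}")  # 1e+08, i.e. 100 x one million
```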
This is not reflected in the current priorities of academic research, where human extinction is a rather neglected area. There is more research on zinc oxalate than on human extinction, more on snowboarding than on zinc oxalate, and much more on the dung beetle than on all of the others combined. I think we sometimes have the sense that the topic is too big, too important, too enormous to really fall within the microscopic lens of academic research. There might be other explanations as well, but there seems to be a dissipation of attention: attention is not always directed to what is most deserving of it and most important.

Now, maybe this could be defensible if the probability were so negligible that, even though the values at stake are enormous, it simply could not happen; then there would perhaps be no reason to worry about it. But this does not seem to be so. It is difficult or impossible to rigorously assign a particular probability to the net level of existential risk in this century, but people who have looked at the question and written books on aspects of it typically assign a substantial probability. We had a conference here in Oxford a couple of years ago where we brought together experts in different risk areas from around the world, and at the end of it we took an informal poll. The median answer to the question "how likely do you think it is that humanity will be extinct by the end of this century?" among this group of experts was 19%. That is roughly in line with what other people who have written about this have said. It might be more, it might be much less, but either way we do not seem to have any solid evidence that would enable us to assign, say, less than a 1% chance of this happening in the next century. And of course, if we consider longer timescales, the probability increases. What we currently take to be the normal human condition is really a hugely anomalous condition. In space, Earth is this very rare crumb; most of the universe is just vacuum, inhospitable to life. And in time, the modern human condition is very unusual on geological, evolutionary, even historical timescales. The longer the period in the future we consider, the greater the chance that humanity will break out of this human condition, either downwards, by going extinct, or upwards, by perhaps developing into some kind of post-human condition.

Now, what are the major existential risks? We have only limited time here, so we cannot go into all the details, but there are different ways in which you could classify or carve up the spectrum of existential risk. Here is one way. Notice that human extinction is one kind of existential risk, but it is not the only one. Remember that an existential risk was defined as one that threatens to destroy our entire future, including our potential for desirable development. So another type of existential risk would be permanent stagnation. Another would be flawed realization, where we do develop all the technological capabilities that we could develop but then fail to use them for any worthwhile purpose. And you could also consider a fourth category, where we initially develop all the technologies and initially use them for good, but then something goes wrong. So it is worth bearing in mind that, in addition to extinction, there are these other ways in which we could permanently lock ourselves into some radically suboptimal state, and that might be in the ballpark of being as bad as extinction.

There are other ways in which you can carve up the spectrum of existential risk. You could begin to look at particular risks, maybe particular risks from technology:
what about bioengineered weapons, what about nanotechnology, what about artificial intelligence? That can be informative for certain purposes, and in the interest of saving time I just want to highlight one type of risk here.

Consider this model for how humanity behaves. We have a big urn of possible ideas, possible inventions, possible discoveries, and we put our hand into this urn by doing research, experimenting, and being creative, and we pull out new ideas and try new things in the world. So far we have made many discoveries and invented many technologies, and none has killed us yet. Most of them seem to have been pretty good, like white balls; some have been mixed; nuclear weapons technology, for example, has perhaps been a dark shade of grey. But so far we have never extracted from this urn a black ball, say an invention that would, for example, make it possible for an individual to destroy humanity. Nuclear weapons are quite destructive but really hard to make: you have to have highly enriched uranium or plutonium, these very difficult-to-obtain resources, and big industrial facilities. It is really hard. But before we had discovered nuclear weapons, how could you be sure that there was not a simpler way of achieving the same thing, like baking sand in the microwave oven or something like that, something that would make a destructive capability easily available? Obviously nuclear weapons do not work like that, but if we keep making these inventions, maybe eventually we will stumble on one of these black balls, a discovery that makes it easy to wield enormous destructive power, even for individuals with few resources. And once we have made a discovery, we do not currently seem to have the ability to undiscover it; we have no way of putting the ball back into the urn. So one class of existential risk is of this type: because we have very weak global coordination, and because we cannot uninvent things we have invented, if we keep pulling balls out, maybe eventually we will be unlucky and discover something really destructive.
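To make the urn metaphor concrete, here is a small Monte Carlo sketch. It is not from the talk; the per-draw probability of a "black ball" and the numbers of draws are invented purely for illustration. The point it shows is that when draws cannot be undone, even a tiny per-invention chance of catastrophe accumulates over many inventions:

```python
import random

# Toy simulation of the urn-of-inventions metaphor.
def survives(n_draws: int, p_black: float) -> bool:
    """True if no black ball is drawn in n_draws draws; drawn balls cannot be put back."""
    return all(random.random() >= p_black for _ in range(n_draws))

def survival_rate(n_draws: int, p_black: float = 1e-3, trials: int = 10_000) -> float:
    """Fraction of simulated histories that never draw a black ball."""
    return sum(survives(n_draws, p_black) for _ in range(trials)) / trials

for n in (100, 1_000, 5_000):
    # Analytically this tends to (1 - p_black) ** n: more draws, lower chance of survival.
    print(f"{n} inventions -> survival rate ~ {survival_rate(n):.3f}")
```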
So what I would suggest is that rather than thinking of sustainability as an ideal that involves some kind of stasis, that is, rather than aiming toward some condition which is sustainable in the sense that we could then remain in it for a very long time, we should perhaps think instead of a dynamic notion of sustainability, where the goal is to get onto a trajectory that is sustainable, a trajectory on which we can continue to travel for a very long time, or indefinitely.

To use a metaphor, consider a rocket that has been launched and is now in mid-air, and suppose we want to make this rocket more sustainable. What could we do? One thing we could do is reduce the rocket's fuel consumption so that it goes slower; in that case it could hover in the air for a bit longer, but in the end it is going to crash down. The other thing we could do is keep the engines roaring and try to achieve escape velocity; once we are out in space, the rocket can go on indefinitely. In this second strategy we would actually temporarily decrease sustainability, burning fuel at a faster rate, but in order to get onto a more sustainable trajectory. It might be that humanity needs to think similarly, in terms of a trajectory that might involve, at some point, taking more risks in the short term in order to reduce risk in the long term.

This graph suggests that one might think in terms of three different axes: technology on one axis, and ultimately we want more of that; insight on another axis, and we also want more of that; and coordination, where we want to be able to collaborate and cooperate better. Ultimately we want to have as much as possible of all of these; that is the way to realize humanity's potential in the long run. But that leaves open the question of whether, in the short term, it is always better to have more technology, or more coordination, or more insight. Maybe you need to get more of one before you get more of another. Maybe you need a certain level of global coordination before you invent really powerful new weapons technologies, or make dangerous discoveries in synthetic biology or nanotechnology or something like that.

So you might now wonder: what can we actually do to reduce existential risk? That, of course, is a topic for another day, but I think that even getting to the point where we start to seriously ask ourselves that question is an excellent way to start.
Info
Channel: TEDx Talks
Views: 442,594
Rating: 4.5543981 out of 5
Keywords: TEDxOxford, tedx talk, English, Risk, ted x, United Kingdom (Country), ted talks, Analysis, tedx, UK, ted, Existential, Oxford, ted talk, tedx talks
Id: P0Nf3TcMiHo
Length: 16min 34sec (994 seconds)
Published: Tue Mar 26 2013