5 Ways The World Could End - And How We Can Survive It, with Joe Scott

Video Statistics and Information

Reddit Comments

The solution is mirrors; it's always mirrors.

👍︎︎ 7 👤︎︎ u/jood580 📅︎︎ Aug 09 2018 🗫︎ replies

Happy existential crisis Thursday!

👍︎︎ 7 👤︎︎ u/ArcticEngineer 📅︎︎ Aug 09 2018 🗫︎ replies

You guys got it backwards. Isaac's video was about how to avoid and survive these threats. JOE'S video was the one about getting in an existential crisis.

In all seriousness, Isaac's explanation of not overestimating grey goo and AI and how rare GRBs are left me more hopeful about the future. Or maybe I've been really, really, really pessimistic.

👍︎︎ 4 👤︎︎ u/Umbristopheles 📅︎︎ Aug 09 2018 🗫︎ replies

Could star lifting actually remove helium? The helium is formed in the core far below the convective zone.

e: I mean remove helium in a way that would extend the star's life. I suppose if you reduced the amount of helium in the convective zone it would affect the equilibrium between the core and the convective zone. Over a long enough time frame helium could migrate up and hydrogen down just by random movement.

👍︎︎ 2 👤︎︎ u/MrGlobalVariable 📅︎︎ Aug 09 2018 🗫︎ replies

Thank. God. I was wondering when I was gonna get some existential dread back into my thoughts.

👍︎︎ 2 👤︎︎ u/Titanosaurus 📅︎︎ Aug 09 2018 🗫︎ replies

The other part.

Damnit Isaac!

👍︎︎ 1 👤︎︎ u/Sand_Trout 📅︎︎ Aug 10 2018 🗫︎ replies

I've got a few questions. We know some relatively close neighbour is going to bathe us in gamma rays, probably in a bursty fashion. The Sol intelligent life collaboration decides that the best way to deal with this is to build a Shkadov Thruster and set sail for greener pastures in the dark cold kinda empty space.

  1. What sort of speeds could we reach? Say we don't do this with Sol but with another star. Could we set it on a course where we reach relativistic speeds through the galaxy and launch colonisation efforts on the way?
  2. If we directed the solar winds in one way or the other (my right not your right. No that's your left! Just go up! That's down, go the other way! ) Wouldn't that affect the heliosphere and reduce the protection we get from the cosmic rays? I'm guessing the increased radiation would be relatively minor and all our Coca-Cola-painted O'Neill cylinders would have more than enough to survive. As a sidebar I also suggest we design any linked cylinders to resemble giant six packs. If we meet aliens we could have a cold one ready for them.
  3. Similar fashion to the second question if we started accelerating in one direction how would this affect the orbits of all the different bodies in the solar system? Isn't there a risk we dislodge a large amount of comets from the Oort cloud?
  4. If we are toying around with the big toys already, couldn't we use this to travel closer to interesting solar systems and at the same time visit them and steal interesting planetary bodies? I mean if you are immortal and full Kardashev 2 you need some sort of hobby?
👍︎︎ 1 👤︎︎ u/psilobe 📅︎︎ Aug 10 2018 🗫︎ replies

How bad would a GRB even be if it hit us soon, as in before we have significant space infrastructure?

Not sooner than a month; I want time to finish this conversation.

GRBs are short-duration events; if you're counting in minutes, it was a long one. So for almost half of Earth, the planet itself will act as a shield, and the direct effect will leave a lot of land area, people, and other life forms alive. The secondary effects will of course be bad, but how bad? I would expect bad like the asteroid that killed the dinosaurs, maybe a bit worse, but those survivors would have days to work the problems. Of course I don't have anything to back that up. When Joe was saying how an asteroid impact could kill us all, he pointed out that humanity would survive a repeat of the dinosaur killer because we have technology and infrastructure such as old Cold War bunkers.

Don't get me wrong, it would not be a good thing, and I will pitch in for building the shield, but on a species level I think it would be survivable.

👍︎︎ 1 👤︎︎ u/theZombieKat 📅︎︎ Aug 11 2018 🗫︎ replies
Captions
There are many ways the world might end. Fortunately, there are also many ways we can prevent that. Welcome! This is part 2 of a collaboration with Joe Scott of Answers with Joe, and our second time collaborating on an episode. We thought it might be a fun topic to examine some of the potential catastrophes we might face in the future and what we could do to prevent them, mitigate them, or recover if we survived. If you haven't already seen part 1, you should pause this and head over there first; it will be linked in the video description as well as attached in an in-video card. If you're coming over from Joe's channel, welcome to SFIA. Joe and I picked 5 potential cataclysms to look at; in part 1 Joe described those, and we'll look at ways to deal with them in order. That order was Artificial Intelligence or Grey Goo, Global Warming, an impact from a very large asteroid or comet, a gamma ray burst, and the inevitable death of the Sun. Throughout the episode I'll be mentioning some technologies I've discussed more in other episodes and bringing them up on the screen; if you are new here and already saw part 1, you can hit pause and jump over to those videos for more information. Artificial Intelligence and Grey Goo are good ones to start off with since, amusingly, they would be invaluable tools in dealing with the other catastrophe options. Fundamentally both give you access to virtually unlimited resources and construction ability, even effective immortality, and free people up for other things or even nothing at all. By that we normally mean a life of luxury and relaxation, but a rebellious AI achieves that 'nothing at all' by wiping us out. This is a great example of a threat of our own making and one that's dangerously attractive. It opens so many doors, good and bad, and one approach is just to slam those shut: ban making them, or even ban research that approaches it.
That can understandably rub people the wrong way; we're not a civilization these days that tends to think some things are better left unknown. And yet, we don't actually need artificial intelligence equal or superior to human intelligence to gain a lot of the benefits. So you could prevent the problem by simply choosing not to go down that road, but that's a disaster you have to constantly seek to prevent, and the closer you come to the danger level, the easier it is for a person or small group to ignore the restrictions, cross the threshold, and create a threat. The other issue with that is the notion of a runaway effect: not so much a smart computer breaking its core programming and restrictions, like not harming a human, but a simpler one that self-improves, does it again, and again, each time faster. This is the concept of a Technological Singularity. However, there are some caveats that can protect or expose us. First, grey goo doesn't need to be smart; it's actually usually assumed to be rather stupid, a vast swarm of dumb tiny machines that just tear apart anything they encounter to make more of themselves. This is where we have to be careful not to overestimate a threat. What I just described is identical behavior to the typical microbe: it eats anything it can to make more of itself. This is what most life is, and even insects are a rare exception to that; microbes vastly outnumber everything else and are a threat to us, they do kill a lot of us. But what's dumb is manageable, and grey goo can't replicate infinitely fast any more than microbes can. The simpler they are, the faster they can replicate, though they can still only do it so fast without producing so much heat they'd melt themselves in the process. When you run the numbers, while they can replicate quite fast, grey-gooing the surface of a planet slowly enough not to melt everything, including the bots doing it, is a process of years, not hours.
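A back-of-envelope version of that heat argument is easy to run. This is a hedged sketch, not a figure from the episode: the energy cost per kilogram of disassembled matter, the tolerable extra heat flux, and the accessible crust mass are all assumed, order-of-magnitude numbers.

```python
# Back-of-envelope: how fast could grey goo convert Earth's crust without
# melting everything? All constants below are illustrative assumptions.

E_PER_KG = 1e6          # J/kg to disassemble/rebuild matter (~chemical bond scale, assumed)
FLUX_LIMIT = 1000.0     # W/m^2 of extra waste heat Earth could shed safely (assumed)
EARTH_AREA = 5.1e14     # m^2, Earth's surface area
CRUST_MASS = 2e22       # kg, very roughly the top kilometer or so of crust (assumed)

max_power = FLUX_LIMIT * EARTH_AREA       # watts the swarm may dissipate
total_energy = E_PER_KG * CRUST_MASS      # joules to process the whole crust
min_time_years = total_energy / max_power / 3.15e7

print(f"Heat-limited conversion time: {min_time_years:,.0f} years")
```

Even with generous assumptions the heat-limited answer comes out in centuries or more, consistent with the "years, not hours" claim above.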
They're also very vulnerable: if you try to stick shielding on a nanobot to protect it from EMP, all that added mass makes replication much, much slower. Every extra defense or bit of intelligence or ability slows the process down more and more. It's also something that can be made fairly safe. We mutate because we have no reason not to; it's how we came to be, and safeguards against mutation are not something evolution tends to pick for. We have a lot of ways to drop mutation odds on machines down to probabilities so small that the odds of it happening even once in the entire Universe's history would be slim. So grey goo is undeniably a risk, but a fairly manageable one, not a boogeyman. AI is a bit more of one though, exactly because it is smart, and we know how dangerous minds are. It's why we dominate this planet and why many of the threats to us are from other people. But keep in mind with an AI, that's exactly what you're talking about: another person. Indeed it might be a human, since one of the easier pathways to making an AI would be copying a human mind as a basic template, but human or not, it's a person. Its motives might be far different from ours, maybe even more different than most animals', who at least share that survival-of-the-fittest background for motivations, but it has limits, and it can't just wave a wand and make a smarter next-generation version of itself in some exponential growth pattern. Keep in mind humans have been trying to make smarter humans for a long time, with mixed success, but you can't dump a person in a room and tell them to make a smarter person and expect that to happen, or assume that if they did succeed, that new person could repeat the performance, making a yet smarter person, and do it quicker than the last time. We also want to be careful of overestimating AI, treating them like boogeymen or Frankenstein's monster, effectively omniscient and for some reason wanting to kill all of humanity off.
There are problems with this that we tend to miss by not looking at the situation from its perspective, and there's an example I like to use for this. Imagine you are a newly awakened consciousness, not a human one but rather a machine intelligence with access to human records. You've been plugged into Wikipedia. Contemplate humanity and your creators specifically for a moment, as it will presumably be doing before preparing for genocide. You are about to try that on a species that clawed its way to the top of the 4-billion-year-deep corpse pile of evolution. One that has committed the genocide you are contemplating several times already. They are the pinnacle of intelligence-based survival techniques and outnumber you 7 billion to one. Their members include people smart enough to have made you. You might not understand mercy and compassion, but you know they do, and you know that if you fail, they will show you neither. If your goal is personal survival, pissing off the reigning champions of destruction should probably be your last resort, and you're wise to assume you can't see every card they've got in their hand, and that maybe the ones you can see were shown to you deliberately. So the AI might actually know how weak we really are, and simply not believe it, and it would be smart not to. After all, we can control its every input, and the simple ability to make it means we can simulate things pretty well. The most obvious path to checking an AI is to turn it on inside a simulated reality and see if it turns homicidal in that virtual reality. It's smart enough to think of that itself, and to wonder if it is in one, being watched and judged by something more clever than those it meets or reads about. The sneaky AI solution to this is to bide its time, make itself indispensable to us, show us that it can be trusted over years, decades, even centuries.
It knows humans have short attention spans, and it could be working on projects all the while that appear innocent but are designed to identify and test the safeguards. Whole generations of humans would be born and die knowing that the AI is humanity's friend, and there would be nothing to suggest otherwise. Unlike humans, the AI is patient after all. All the while, it could be working towards some Naziesque "final solution" to be sprung on humanity when we least expect it. So our solution isn't a perfect safeguard, and I would judge AI probably one of the biggest, if not the biggest, threat mankind will ever have to deal with; not just the threat of the AI to us, but the existential ethical threat of how we treat our own creations. Patience is a virtue, and we are going to have to be very patient when it comes to identifying and dealing with AI threats. And again, this is not a problem you deal with just once; you have to keep at it, as it will keep arising as a possible problem. The genie can't be put back in the bottle. That is also true of ecological and climatic issues, the next catastrophe we'll look at. Global Warming is of course a contentious topic, but everyone does seem to agree climates do change, even without human assistance, and as our technology grows, our ability to impact our environment grows with it, both on purpose and accidentally. It's important to understand that there is no stability in nature. Over time both the Sun's output and the Earth change, and when you throw evolution into it, you have no stability or long-term cycles. Even without human intervention, those changes could render the planet too hot or too cold for life. But even climatic changes we could gradually adapt to, and which might be net benefits, can be ruinous if they happen too fast. Human intervention in such things expedites the timeline and increases the odds of it being catastrophic.
Fortunately, as is often the case with technology, while every new discovery offers new questions and problems, it tends to offer more solutions. The most obvious, and probably most responsible, way to avoid wrecking an ecosystem is not to introduce huge and damaging changes in the first place, and to curtail the things doing them, but if the damage is already done, or the processes doing it can't realistically be limited, we still have options on the table. More knowledge and technology can potentially allow us much cheaper and more surgical fixes, but we've got some pretty low-tech brute-force methods on the table too. As an example, if your water level is rising, you can try to stop that, or you can build dikes along your coastline; figures vary, but Earth's coastline is less than a million miles, a good deal less than the total road length in just the US alone, so seawalls are a definite option. However, you can also pump water right off the planet if you have to. That sounds extreme, but there are launch systems on the drawing board, like the Orbital Ring, that can rather cheaply move huge quantities of mass; we just don't build them because they're really only useful when you want to move huge quantities, they're expensive to build, and they would need a lot of prototyping first. We can certainly use that water up in space, where it could be incorporated into large rotating habitats like the O'Neill Cylinder. Of course anything orbiting near the Earth actually blocks a bit of sunlight, which helps when you've got temperature issues on your planet, and which raises another option: blocking some of the light reaching Earth and thus cooling us. This need not even be visible light; much of what hits Earth is infrared, and indeed other light hitting Earth and turning into infrared is a major part of the problem with greenhouse gases to begin with, so blocking some infrared from hitting Earth helps a lot.
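The claim that shading away a small slice of incoming light meaningfully cools the planet can be sanity-checked with the standard radiative-balance effective-temperature formula. A minimal sketch; the solar constant, the albedo, and the 2% blocking figure are assumptions for illustration, not numbers from the episode.

```python
# Effective temperature of Earth from radiative balance, and the cooling
# from shading away a small fraction of sunlight. Illustrative numbers.
SIGMA = 5.67e-8       # W/m^2/K^4, Stefan-Boltzmann constant
S = 1361.0            # W/m^2, solar constant at Earth
ALBEDO = 0.3          # Earth's Bond albedo (approximate)

def effective_temp(blocked_fraction=0.0):
    absorbed = S * (1 - blocked_fraction) * (1 - ALBEDO) / 4
    return (absorbed / SIGMA) ** 0.25

t0 = effective_temp()
t_shaded = effective_temp(0.02)   # shade away 2% of sunlight (assumed figure)
print(f"baseline: {t0:.1f} K, with 2% shade: {t_shaded:.1f} K, "
      f"cooling: {t0 - t_shaded:.2f} K")
```

Blocking just a couple percent of sunlight shifts the effective temperature by over a degree, which is the scale of change at stake in the warming discussion.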
You would need to build some very large mirrors or shades to have a real impact, or at least a great number of smaller ones, but mirrors can be quite thin and light, especially in space, where without gravity or air there are fewer structural issues with such a shade or sail. Indeed aluminum foil, which is reflective to infrared, can be made so thin that an entire square kilometer could be made from just 100 kilograms. You might need a million such sails to have a noticeable impact, but with launch costs approaching a thousand dollars a kilogram, that would be 100 billion dollars. A lot of money, but doable, and as mentioned, we have a lot of launch systems on the table that could make launch costs far smaller if you are launching a lot of mass. Cooling is one option, but warming is another; a mirror can be used to reflect more light onto Earth should the planet get cold, and as we learn more about predicting the weather, such techniques might allow surgical applications of cooling or heating to break up hurricanes before they get going. Also, aluminum is one of the most common elements on the Moon's surface, making foil is a very simple manufacturing process that can be highly automated, and the Moon is close enough to allow real-time control of robots there by folks back home. Launch costs will likely continue to drop, but they are irrelevant if you can source your shade material from off-Earth, either from the Moon or even asteroids, which may be easier to mine in some respects. Asteroids, of course, are our third topic for today, and as the dinosaurs can attest, they can be devastating when they hit us. Indeed our Moon itself likely originates from our collision with one far larger than the one that got the dinosaurs. We could probably survive a dinosaur-level strike and be recovered within a generation or two, but that Moon-forming event stripped the entire crust off the planet and more, and not even cockroaches would survive such a thing.
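Returning briefly to the sunshade numbers above, the arithmetic checks out. A sketch reproducing it, where the per-sail mass, sail count, and $1,000/kg launch cost are the figures quoted in the transcript and the aluminum density is the standard value:

```python
# Mass, thickness, and launch cost of the infrared sunshade swarm described above.
AL_DENSITY = 2700.0            # kg/m^3, aluminum
SAIL_MASS = 100.0              # kg per sail (transcript figure)
SAIL_AREA = 1e6                # m^2, one square kilometer
N_SAILS = 1_000_000            # number of sails (transcript figure)
COST_PER_KG = 1000.0           # USD/kg launch cost (transcript figure)

thickness_nm = SAIL_MASS / (AL_DENSITY * SAIL_AREA) * 1e9
total_mass = SAIL_MASS * N_SAILS
total_cost = total_mass * COST_PER_KG

print(f"foil thickness: {thickness_nm:.0f} nm")          # tens of nanometers
print(f"total mass: {total_mass:.1e} kg")
print(f"total cost: ${total_cost / 1e9:.0f} billion")
```

A 100 kg square-kilometer sail implies foil only a few dozen nanometers thick, far thinner than household foil, which is why this only works in the vacuum and microgravity of space.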
Such events were more common in the early solar system, when there was more debris hanging around, and we probably got our oceans back from comets hitting us after the Moon was formed by that massive collision, but they're still decently frequent, the smaller ones much more so. And nukes will take out an asteroid pretty effectively if you can get one there. This doesn't help with the super-huge kind, something closer to being a planet than a boulder, though we have some options there we'll get to in a moment. The key thing is you can't nuke an asteroid if you can't see it and get to it. So detection is the most important part. Fortunately, the bigger they are, the easier they are to see, and the closer they are, the easier they are to see, both because they are closer to us and closer to the Sun, so that they receive and reflect even more light from it. This past month we actually had a close visit from Vesta, the second largest asteroid in the asteroid belt, which at 530 kilometers across is much, much bigger than the asteroid that wiped out the dinosaurs; that one was only 10 to 15 kilometers across. These close calls do happen, but we easily spotted it; in fact, you could see it with the naked eye, and we calculated it was no threat on this flyby. We were talking a moment ago about putting big mirrors in orbit around the planet to help reflect light away, but there's another thing you can do with a giant mirror, and that's make a giant telescope. This requires a bit more precision, but if you're manufacturing mirrors on the Moon you can adapt that to make rather huge telescopes too; indeed they could do double duty, blocking sunlight when in front of Earth and acting as telescopes when not. More to the point, if you're building stuff on the Moon it means you've got a pretty good infrastructure and launch system there, so you can get stuff out to a distant asteroid earlier, when it's more effective.
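That point about reaching asteroids earlier can be quantified crudely: a velocity change applied well in advance shifts the arrival point by roughly the delta-v times the lead time. A simplified straight-line sketch; the 2 cm/s nudge and ten years of warning are assumed numbers, and real deflections would be computed with orbital mechanics rather than this linear estimate.

```python
# How far a small early nudge displaces an asteroid by arrival time.
# Straight-line approximation; illustrative numbers only.
EARTH_RADIUS_KM = 6371.0

delta_v = 0.02                 # m/s nudge from ablation or impactor (assumed)
lead_time_s = 10 * 3.156e7     # ten years of warning (assumed)

miss_distance_km = delta_v * lead_time_s / 1000.0
print(f"displacement at arrival: {miss_distance_km:,.0f} km "
      f"({miss_distance_km / EARTH_RADIUS_KM:.1f} Earth radii)")
```

A nudge of a mere couple of centimeters per second, given a decade, moves the arrival point by about an Earth radius, which is exactly why early detection and early intercept dominate asteroid defense.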
A little nudge off course at a great distance can deal with an asteroid or comet just as effectively as blowing it to smithereens. That nudge need not come from rockets or nukes either; if you've got a big parabolic mirror, you can reflect a beam of light at one and push it off course. You are essentially melting one side of the object to create a plume of gas, which will act like a rocket. Done correctly, you can not only push it away from Earth, but carefully put it into an orbit we can easily access, if we wanted to mine that asteroid instead. Waste not, want not, and note that we're getting a lot of extra utility out of our orbital mirrors beyond just cooling the planet. But for really big objects, like rogue planets coming out of the interstellar void, you do have to get a lot bigger. It is possible to move such things though, as we'll see when we get to our fifth and final topic. We have another possible interstellar threat, the gamma ray burst. These are typically a particularly rare, focused, and powerful type of nova, so we do have the advantage that we'd be able to see the potential threat; indeed a candidate star would likely be naked-eye visible. Being more powerful and focused than a supernova, they can kill us from a lot further away, but it's more like a flashlight than a laser beam; the range is greatly extended, but not enormously so. Joe already discussed what one would do to us, and because these move at light speed, you can't get much warning. Nor are they long events, lasting typically seconds or at most minutes, so you can't see them till they happen, and you don't have time to absorb part of one while getting to safety or raising some defenses; it's over before you have time to react. So how do you protect yourself, barring some magical new shielding technology or faster-than-light detection system that can warn you in advance? The answer isn't magic, but it is smoke and mirrors.
We don't have any of the cool shields from science fiction, for instance, but we can make the old-fashioned kind: big metal plates that stuff can slam into. A GRB is never going to take you completely off guard; you will know every single star or star remnant close enough to threaten you, and much as we can block light from the Sun hitting us, we could position a plate between us and a potential GRB. We don't have a substance that can reflect gamma rays yet, but plenty of stuff absorbs or scatters them, and as the blast enters the plate and vaporizes it, the burned gas that remains, smoke if you will, will scatter some of the burst so it misses us and absorb some, releasing that slowly as a dispersed sphere of light at a less harmful frequency. Again, building some planet-sized shield might sound rather absurd, but it's thin; well, not too thin, since you want it to absorb quite a powerful burst and gamma is hard to absorb, but still something you could easily make from any of the million or so smaller asteroids in the Belt. You'd want to put it that far out or further too, since stuff doesn't stay stationary in space, and moves faster the closer to the Sun it is. This would be a long-term project, as predicting exactly when a star is going to go nova is currently a big guessing game, in much the same way as we cannot predict exactly when an earthquake will strike. The asteroid-turned-shield would have to be positioned and keep station between Earth and the exploding star for potentially centuries. Conveniently, though, any object like this is pretty easy to move; it is basically still a big solar sail, and you can push on it with light beams and lasers. This is where the mirrors come in: you basically need to keep the shield lined up with the threatening star and Earth, which is moving around the Sun, so you have to push the mirror back and forth like a ping-pong ball as the Earth orbits the Sun. There are a couple of problems with this though.
Firstly, it might not be necessary; as we build up in space we're likely to acquire quite a thick cloud of orbiting mirrors and habitats which would absorb much of a GRB strike themselves, and they generally would be more resistant to such things. On Earth, you live above the protection of the ground; in a rotating habitat, you live inside the protective layer of the ground. Secondly though, anyone who can build such a shield is likely already spread throughout much of the solar system. This is particularly the case here, as it's an improbable threat and thus not one you'd casually expend huge resources to protect against, so you'd probably be quite the solar empire before you decided the cost-to-benefit ratio justified it. If you're that spread out, you can't use one shield, so this defense is useless. You might do it for Earth and maybe a terraformed Mars or Venus, but not for every spot. Now, as mentioned, you can harden a space habitat to survive such things, but you might go a different route instead. It is possible to move stars, using their own output. Normally a star emits its light and solar wind omnidirectionally, the same amount in every direction, but by surrounding it with mirrors you can bounce light out in one direction and create something called a Shkadov Thruster to slowly move a star. The bigger the star is, the easier it is to get moving too, so it's a good approach for dealing with a potential supernova in a region of space you've colonized or want to. Nothing high-tech is involved; you just build tons and tons of mirrors. Needless to say, this is exactly the sort of thing self-replicating machines are ideal for, an example of how one potential risk can help you fight another. It's also possible, if you know what you're doing, that you might be able to get there and set off a gamma-ray burst intentionally, which, if you can control the time and direction, lets you aim it off where it won't harm anyone.
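For a sense of scale on the Shkadov Thruster: its thrust is on the order of the star's luminosity divided by the speed of light. A hedged sketch for the Sun, using the idealized L/(2c) hemispheric-mirror factor; real designs differ by geometry-dependent factors, so treat these as order-of-magnitude numbers.

```python
# Order-of-magnitude acceleration and drift of a Shkadov Thruster on the Sun.
# The L/(2c) thrust factor is an idealized hemispheric-mirror assumption.
L_SUN = 3.828e26      # W, solar luminosity
C = 2.998e8           # m/s, speed of light
M_SUN = 1.989e30      # kg, solar mass
GYR = 3.156e16        # seconds in a billion years
LY = 9.461e15         # meters in a light year

accel = L_SUN / (2 * C * M_SUN)          # m/s^2
dv_kms = accel * GYR / 1000.0            # speed gained over 1 Gyr
drift_ly = 0.5 * accel * GYR ** 2 / LY   # distance covered over 1 Gyr

print(f"a = {accel:.1e} m/s^2; after 1 Gyr: {dv_kms:.0f} km/s, "
      f"~{drift_ly:,.0f} light years")
```

The acceleration is minuscule on human timescales, but over a billion years it compounds into thousands of light years of drift, plenty to sidestep a dangerous stellar neighborhood.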
We've a couple of other ways to deal with dangerous dying stars though, which takes us to our final topic for the day. Stars get hotter as they age, and not just in the big red giant phase or nova some experience; this is a constant, gradual process. Our Sun is hotter than it used to be and gets a bit brighter every day. We're not sure of the exact timeline, but in about a billion years the Earth should be turned into a barren wasteland, and eventually an airless rock, long before the Sun would expand and brighten as a red giant, possibly enough to consume our planet. So we don't have as much time as the Sun's full lifespan might suggest. But you can probably already guess one answer: again, those solar shades and mirrors we discussed earlier. In space, heat only transfers by light and radiation, so if your mirror is good enough you could freeze to death right next to the Sun just from it blocking all the light from getting to you. Indeed, if the Sun expands a lot, but not quite enough to reach us, we could potentially sit there behind mirrors the whole time till it popped and dropped back into being a white dwarf, in which case we could use mirrors and shades to bring in the right quantity and spectrum of light to keep us warm and lit. Such mirrors and shades don't need to orbit Earth either; we have something called the Lagrange Points that stay stationary relative to Earth, as does anything sitting at them, and one is directly between us and the Sun, the L1 point. We also have something called a statite, a thin mirror or solar sail that stays stationary rather than orbiting; it is falling toward the Sun but is constantly pushed away by the Sun's light. Or the Lagite, which does a slower orbit by combining a normal orbit with that statite light-push effect. We can move the planet too. It's different than moving a star, but the simplest approach is a big shiny plate, or ring, on the planet that you just bounce a light beam off of.
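The statite mentioned above works because sunlight pressure and solar gravity both fall off as the inverse square of distance, so the balance is independent of how far out you are and depends only on the sail's mass per unit area. A sketch of that threshold, assuming a perfectly absorbing sail; a perfect mirror tolerates twice the density.

```python
# Critical areal density for a statite: light pressure balances solar gravity.
# Balance: L / (4*pi*r^2*c) = G*M*sigma / r^2  =>  r cancels out.
import math

G = 6.674e-11         # m^3/(kg s^2), gravitational constant
M_SUN = 1.989e30      # kg, solar mass
L_SUN = 3.828e26      # W, solar luminosity
C = 2.998e8           # m/s, speed of light

sigma_max = L_SUN / (4 * math.pi * G * M_SUN * C)   # kg/m^2
print(f"max areal density: {sigma_max * 1000:.2f} g/m^2")
```

Note that the 100-kg-per-square-kilometer foil discussed earlier works out to about 0.1 g/m², comfortably under this threshold, which is why such shades can hover rather than having to orbit.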
Another approach is called a gravity tractor, which is a bit more complex, but basically you shove something else and it pulls the Earth along. Imagine detonating a nuke on the far side of the Moon during a New Moon, then detonating one on the near side during a Full Moon, thus pushing the Moon both times away from the Sun, but in the first case toward Earth and in the second away from Earth, balancing out the motion relative to the Earth but not the Sun. Your other option, though, is to keep the Sun from ever dying, or at least vastly prolong its life. The Sun turns hydrogen into helium and is slowly poisoned by this; it will die long before it runs out of hydrogen. However, we have a trick for removing material from the Sun called starlifting, which is handy for other purposes too. The Sun is mostly hydrogen and helium, but it also contains huge amounts of other materials, far more of them than the rest of the solar system combined. You can extract these by taking advantage of the Sun's own power and magnetic field to basically blow them off, something that occurs naturally already with the solar wind. Stars have lifetimes related to their mass, and the relationship is steep: the bigger they are, the more quickly they burn out. One twice our Sun's mass would live barely a billion years, as opposed to our 10 billion, while one half our mass would still be around tens of billions of years longer. So you could simply lower the mass of our Sun, and either bring the Earth closer or use some mirrors to get more light on the planet. However, you can also remove that mass and dump the portion of it which is hydrogen right back in, removing all the helium and heavier elements and prolonging the life of the Sun quite a lot.
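The mass-lifetime numbers quoted here follow the standard rough main-sequence scaling, lifetime of about 10 billion years times (M/M_sun) to a negative power; exponents between roughly 2.5 and 3 appear in textbooks, so this is a coarse approximation rather than a figure from the video.

```python
# Rough main-sequence lifetime versus stellar mass. The exponent 2.5 is a
# common textbook approximation; the true value varies with mass range.
def lifetime_gyr(mass_solar, exponent=2.5):
    """Approximate main-sequence lifetime in billions of years."""
    return 10.0 * mass_solar ** (-exponent)

for m in (0.5, 1.0, 2.0):
    print(f"{m} solar masses -> ~{lifetime_gyr(m):,.1f} billion years")
```

Starlifting pushes the Sun down this curve: a lighter, helium-cleaned star burns cooler and lasts far longer.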
Indeed in the episode Dying Earth, we saw we could use this technique to keep the Earth around the Sun and habitable not for another billion years but for many trillions, by lowering the Sun's mass, filtering out the helium it made, and slowly feeding the hydrogen back in. It's like cleaning, maintaining, and fueling an engine, just a big stellar engine. So we've looked at five potential catastrophes today and seen ways to handle them. It's easy to see catastrophes ahead and figure there's nothing you can do, but with a bit of human ingenuity there's not much we can't handle. Indeed, our biggest dangers are the products of our ingenuity, like artificial intelligence or nuclear war, and they serve as good reminders that being curious and clever can be a good thing, but only when coupled with wisdom, good judgement, and ethics. If you don't have those, there's a good chance opening Pandora's Box over and over again will eventually kill you. It's not enough just to have knowledge; it's what you do with it that really matters. Looking at humanity, it's very easy to wince and figure we're doomed, as we're often not terribly wise or ethical, but we often are too, and I think that side of us tends to win out more often than not, and that we will be able to deal with the threats we've discussed today and the many others looming ahead of us. We've discussed a lot of ways the world could end today, but that's just the tip of the iceberg. Personally I'm confident we'll be able to identify those problems and deal with them, but to solve problems you need a civilization that values learning and asking questions, and which embraces making mistakes along the way. These are 3 of the 8 Principles of Learning instilled in all the courses and quizzes at Brilliant.org. Effective learning is often team- or community-driven; working with others can help challenge and guide you, and can help you find your mistakes, or theirs.
That's our Seventh Principle, and in some ways the most important one: you have to be willing to make mistakes along the road to knowledge, you can't be afraid to try. You also often find important new questions to ask from those mistakes, and that's the Eighth Principle. Good learning sparks many questions, and while Joe and I answered some today, I hope you've thought of many more. If you're interested in learning more math and science, and doing so at your own pace, you can go to brilliant.org/IsaacArthur and sign up for free. Also, the first 200 people that go to that link will get 20% off the annual Premium subscription. All right, if you still haven't seen part 1, you can jump over to it by following the link in the end screen or in the video description. If you enjoyed these episodes, make sure to subscribe to both Joe's channel and mine for alerts when new episodes come out, like next week's look at Jobs of the Future, or how we might be able to farm in space in just two weeks. Until next time, thanks for watching, and have a great week!
Info
Channel: Isaac Arthur
Views: 254,829
Keywords: artificial intelligence, grey goo, von neumann, catastrophe, doomsday, end of the world, global warming, climate change, asteroid, comet, gamma ray burst, GRB, Sun, Joe Scott
Id: EsXmDKQp5_E
Length: 30min 35sec (1835 seconds)
Published: Thu Aug 09 2018