This $8 Trillion Coronavirus Mistake Could Kill 100%, w Stephen Fry. AI Is Watching.

Video Statistics and Information

Video
Captions Word Cloud
Reddit Comments

nice

👍︎︎ 1 👤︎︎ u/lerobinbot 📅︎︎ Oct 03 2020 🗫︎ replies
👍︎︎ 1 👤︎︎ u/jeremiahthedamned 📅︎︎ Oct 04 2020 🗫︎ replies

Wuhan institute of virology.

👍︎︎ 1 👤︎︎ u/RandomOpponent4 📅︎︎ Oct 03 2020 🗫︎ replies
Captions
Existential risks have the potential to eliminate all of humanity, or to kill large swathes of the global population and leave survivors unable to rebuild modern society. Some of these disasters are likely to happen in our lifetime or our children's lifetime. One of the greatest risks is a pandemic far worse than coronavirus. The number of labs working with dangerous pathogens has quickly grown into the thousands, and there have been many accidental releases. Smallpox killed over 300 million people in the century before it was eradicated; in 1978 it escaped from a British lab, claiming a final victim, and more recently, in the US, abandoned vials of smallpox were found in a cardboard box during an office move.

Researchers are planning to make avian flu, which has a mortality rate of 60 percent, more transmissible among humans. When a strain of avian flu made the leap in 1918, it killed around 50 million people. The US government paused the research after warnings from hundreds of scientists and safety failures at many labs, but last year the research was allowed to resume. The aim is to predict how the virus might mutate and to create vaccine stockpiles, but Professor Salzberg points out that influenza mutates while circulating among millions of birds, and that we don't even stockpile seasonal flu vaccine because the virus mutates too fast. Virus-enhancing research like this has been likened to the discovery of nuclear power and nuclear weapons.

A study calculated the risk of a dangerous pathogen escaping a lab and causing a pandemic. It found that for a fairly contagious pathogen, the chance of an undetected escape could be as high as 15 percent. "We found that more than 100 labs experimenting with potential bioterror agents have faced enforcement actions for serious safety violations since 2003." Many private labs are not monitored for mistakes, and nobody knows how many there are. The cost to chemically manufacture strands of DNA is falling rapidly, and it may even become affordable to print deadly pathogens at home.

Biology could also be weaponized by terrorists or states: North Korea is thought to have assembled an arsenal containing anthrax, botulism, smallpox and typhoid. At the same time, regulations are being removed from a threat which brought the world to the brink of catastrophe. The US and Russia have over 12,000 nuclear warheads between them, and a key treaty was suspended in 2019, ending annual on-site verifications. Studies suggest that nuclear war between Russia and the US would plunge the northern hemisphere into darkness within a week, and the southern hemisphere a week later. The models, which use data from forest fires, volcanic eruptions and previous nuclear bomb detonations, project a decade-long nuclear winter: 150 million tonnes of soot would be released by the nuclear explosions and ensuing fires, blocking the sun. Temperatures and precipitation would drop dramatically, and agriculture would be devastated. While most people would survive the initial nuclear war, it's unclear whether humans would survive ten years of cold temperatures with minimal food.

Many scientists argue that mutual assured destruction should be renamed self-assured destruction, as a major nuclear attack would mean self-destruction even without retaliation. Yet the superpowers plan to spend over a trillion dollars upgrading their nuclear arsenals. "The one thing I convinced myself of, after all these years of exposure to the use of nuclear weapons, is that they were useless. They could not be used. So you can have deterrence with an even lower number of weapons. Well then, why stop there? Why not continue on? Why not get rid of them altogether?"

There have been many serious accidents. When a plane broke apart over North Carolina, dropping two nuclear bombs, five of the six safety devices failed. "What prevented the detonation was one switch, and a fair amount of good luck, because that safety switch was later found in some cases to be defective." Had the device exploded, the fallout could have stretched across the eastern seaboard.

In 1983, computers at Russia's nuclear early warning center reported incoming US missiles. Stanislav Petrov was supposed to immediately notify his superiors, initiating a nuclear counter-attack. "The siren sounded very loudly, and I just sat there for a few seconds, staring at the screen with the word 'launch' displayed in bold red letters. A minute later the siren went off again; the second missile was launched, then the third, and the fourth, and the fifth. The computers changed their alerts from 'launch' to 'missile strike'. We knew that every second of delay took away valuable time. I made my decision: I would not trust the computer. I reported that the alarm was false, but I myself was not sure."

"Picture this: four air force officers who hold the launch keys to nuclear missiles leaving open the blast door that's supposed to prevent terrorists from entering the capsule, this while another slept inside, which is allowed only if the door is closed. And who discovered this? In one case it was a maintenance team; in another case it was discovered by someone delivering food."

The next existential threat resists isolation; experts agree that any attempts to contain it may even accelerate our downfall. Advanced AI is unlikely to look or think like humans. "Working with AI is less like working with another human and a lot more like working with some kind of weird force of nature." In Refik Anadol's stunning work, AI explores data from the environment and the human brain from thousands of perspectives. How will it see us? It may be indifferent to us, but brutally efficient if we get in its way. "Once you get machines that are better than us at that narrow task of building AI, then future AIs can be built not by human engineers but by machines, except they might do it thousands or millions of times faster." Nick Bostrom anticipates an intelligence explosion, with AI becoming vastly more intelligent than humans. If it had vaguely human-like ethics, it might decide to contain us, or to remove our threat to each other and the world. "So I wrote a book about this, which had a big impact on some people."

AI could be surprisingly dangerous. Asked to optimize our happiness, it might feed us a powerful cocktail of drugs; tasked with a geoengineering project, it might destroy our ecosystem in the process and view our attempts to stop it as a threat. "How do you convince Einstein to care so deeply about earthworms that he dedicates his immortal existence to caring about each and every last one of them?" Superintelligence might help us eradicate war, disease and poverty, if it's born from goals aligned with ours. "I think a lot of people dismiss this kind of talk of superintelligence as science fiction, because we're stuck in this sort of carbon chauvinism idea that intelligence can only exist in biological organisms made of cells and carbon atoms. As a physicist, from my perspective, intelligence is just a kind of information processing performed by elementary particles moving around according to the laws of physics, and there's absolutely no law of physics that says you can't do that in ways that are much more intelligent than humans."

And it doesn't need to be conscious or superintelligent to be dangerous. "I wanted the AI to invent new paint colors, like the ones here on the left, and here's what the AI actually came up with." Doing what we say, but not what we mean, could lead to strange and disastrous outcomes. "AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight?"

Armed robots and autonomous weapons are already in development. To avoid hacking, the weapons could be cut off from communication, and conflicts could run out of control. The risk is present with today's technology, and it will grow as AI advances. A new arms race could lead to catastrophic AI wars. "In the US perspective, there is nothing intrinsically valuable about manually operating a weapons system as opposed to operating it with an autonomous function." The United States isn't alone: the countries working hardest to build autonomous weapons insist we can't regulate what doesn't exist yet.

The nuclear, AI and biological risks are all intensified by another threat. By 2050, 55 percent of the world's population could experience more than 20 days of lethal heat per year. This could lead to over a billion people being displaced; rising sea levels could cause people to abandon parts of Mumbai, Jakarta, Guangzhou, Hong Kong, Shanghai, Bangkok and Manila, among other cities. Food production would drop, with deserted farms and chronic water shortages causing food prices to rise dramatically. It's also a recipe for isolationism, increasing the risk of nuclear conflict and pushing countries to develop dangerous technologies, and it creates breeding grounds for deadly diseases.

The coronavirus pandemic may bring existential risks into focus. Petrov described his decision not to launch a nuclear strike as 50/50, and Professor Hanega puts the probability of another flu pandemic at 100 percent. But we can only change course and avoid the risks if leaders wake up and see them coming. Every human life, for the rest of time, quietly hangs in the balance. Please leave a comment to support action on existential risks; it would be a fitting and beautiful legacy for our global efforts during the pandemic.
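The escape-risk figure quoted above compounds across many labs and many years. A minimal sketch in Python of how small per-lab, per-year probabilities accumulate, assuming each lab-year is an independent trial (all numbers are hypothetical illustrations, not figures from the video):

```python
# Hypothetical illustration: if each lab-year carries an independent escape
# probability p, the chance of at least one escape across n_labs labs over
# a given number of years is 1 - (1 - p) ** (n_labs * years).

def cumulative_escape_probability(p_per_lab_year: float, n_labs: int, years: int) -> float:
    """Probability of at least one escape, assuming independent lab-years."""
    return 1 - (1 - p_per_lab_year) ** (n_labs * years)

# Example: a hypothetical 0.2% per-lab-year risk across 40 labs for 10 years
risk = cumulative_escape_probability(0.002, 40, 10)
print(f"{risk:.1%}")  # roughly 55% over the decade
```

Even a per-lab risk that sounds negligible grows quickly with the number of labs, which is why the rapid growth of high-containment facilities matters for the aggregate risk.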
Info
Channel: Pindex
Views: 1,918,515
Rating: 4.796814 out of 5
Keywords: coronavirus, ai, artificial intelligence, nuclear weapons, pindex, stephen fry, facts, covid-19, existential risks, existential threats
Id: PvIBzzo_3AY
Length: 12min 34sec (754 seconds)
Published: Thu Sep 24 2020