Hi! I'm Dave from Boyinaband. This is a little girl eating some cake. This is a cat by a laptop. This is a large airplane on a runway. Those three sentences were generated by a computer analyzing these images, not me. Today I'm going to talk about the near future. More specifically: what effect will artificial intelligence actually have on our lives? Because recently, I came across something which was one of the only times in recent memory that my mind was at first blown, and then completely changed.

First, let's clear something up real quick: AI is NOT big scary robots. It's not like what a lot of movies imply, but it IS real and it IS powerful. People tend to be skeptical of how powerful. There was a point where any self-respecting computer scientist would have been like: [Stereotypical British accent] 'A computer could never understand human speech. It's too complicated!' And yet: "OK Google." [bloop] "What's the weather in England like tomorrow?" Google: "Tomorrow's forecast for London is 6 degrees with showers." Bloody English weather...

Because it's getting so intelligent, I figured eventually it would help us solve little problems like: having to do work we don't enjoy, or keeping us entertained, or how to not die. Now, there were naysayers who said that kind of thing wasn't gonna happen, who I ignored because, y'know... human confirmation bias. Until I put on my glasses and saw who those naysayers were: Stephen Hawking, Bill Gates and Elon Musk. Bill Gates: "How can they not see... what a- a HUGE challenge this is?" Elon Musk: "I'm not against the advancement of AI... but, I- I do think, uh, we should be extremely careful." The world's smartest human, richest human and coolest human respectively. The guy freakin' launches rockets from his own private island. I'm pretty sure at this point it's not even subjective how cool that is!
So it took the measurable top of three major fields of human success to slightly push my cognitive bias into conceding that AI might be dangerous and thinking, "Ugh, fine, maybe it's worth doing some research..." Now, at first my research led me to some frickin' cool things that really emphasised how real it is that AI is getting better than us at a lot of things.

This is a program called MarI/O, which, besides being one of my favourite puns ever, is a program that learned to play Mario without being told anything beyond: "The further to the right you go, the better you are doing." At first it does worse than my cat leaning on the pad as it tests out all the buttons, but eventually it starts to make connections like "the right button makes me go right" and "dying is a bad thing", and a few iterations later, it's pretty good at Mario. Okay, it's really good at Mario. Okay, it's *inhumanly* good at Mario. And this wasn't something like a huge computer lab at Google; this was made by one guy called Seth. Definitely worth looking at Seth's YouTube channel, by the way. It's fascinating.

So that made me wonder: if just ONE YouTuber can program something that cool, what can bigger computer science teams develop? Then I found this TED talk from a woman who led a research team trying to make computers learn how human children learn. Woman: "So to teach a computer... to see a picture, and generate sentences... we developed a model that connects visual snippets... with words and phrases in sentences." Computer: "A man is standing next to an elephant." "A large airplane sitting on top of an airport runway." This program has all of the adorability of a three-year-old, with the added bonus that you can turn it off without ethical issues. *pff* And to reiterate, it wasn't specifically programmed with what these things are. It was programmed to learn what these things are.
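To make the MarI/O idea a bit more concrete: here's a toy sketch of "the only feedback is how far right you got". This is NOT Seth's actual code (real MarI/O evolves neural networks with the NEAT algorithm, and plays a real emulator); in this illustrative stand-in, a "policy" is just a fixed list of button presses, the "level" only rewards moving right, and the learning is plain hill-climbing mutation.

```python
import random

# Toy illustration of reward-only learning, loosely inspired by MarI/O.
# NOT the real thing: real MarI/O uses NEAT to evolve neural networks.
BUTTONS = ["left", "right", "jump", "wait"]

def fitness(policy):
    """Simulate how far right a button sequence gets in this toy 'level'."""
    x = 0
    for button in policy:
        if button == "right":
            x += 1   # the invisible reward: further right is better
        elif button == "left":
            x -= 1
        # 'jump' and 'wait' don't move us in this toy level
    return x

def evolve(generations=300, length=30, seed=0):
    """Hill-climb: mutate one button at a time, keep changes that score >= current."""
    rng = random.Random(seed)
    best = [rng.choice(BUTTONS) for _ in range(length)]
    for _ in range(generations):
        child = list(best)
        child[rng.randrange(length)] = rng.choice(BUTTONS)
        if fitness(child) >= fitness(best):
            best = child
    return best

policy = evolve()
print(fitness(policy))  # starts near random (about 0), climbs toward the max of 30
```

The point of the sketch is the same one the video makes: nobody tells the learner what "right" means or that dying is bad; those connections fall out of blindly keeping whatever scores better.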
Programs have, just by watching YouTube videos, learned the concepts of 'cats' and 'people'. Hell, that's how I learned it. But this is when my epiphany happened. I was happily opening the 27th tab in my browser when I saw Elon Musk had tweeted an article. I opened it thinking, "*pfft* What do they know? AI is gonna let me live forever," and I closed it several hours later thinking, "Holy crap, we are all literally dead."

"Wait! But why!?" I hear the comments say. Not because they're curious about what made me think the apocalypse is near, but because the article was from waitbutwhy.com, which has now become one of my favourite sites ever. But in order to explain the imminent threat of AI, it first explained the law of accelerating returns. The law of accelerating returns says something like: [Snoop Dogg accent] "Okay mate, imagine the year two THOUSIND." Okay, it probably wouldn't have that accent... so imagine the year 2000. The internet is just starting out from dial-up, people are starting to hear about this 'Google' thing, and my dad's phone was released. He still uses this. I would like to see ANYONE using a 15-year-old iPhone. Yep, still works. Credit where it's due, Nokia.

Now we go get someone from 30 years before that: the 70s. After dealing with the emotional trauma of realising glam rock is no longer at the forefront of western culture, he would be like: *in a deeper voice* "Daym, brotha, you can talk to anyone from anywhere in the world with that tiny little thang? ...I need to fax someone about dis. Wait, y'don't fax anymo'? ...Unless yo' from North Korea? ...Then what do the rest o' the world do? You access all of human endeavor from these big-ass typewriter thangs? Dayum! ...I gotta go put on a VHS and unwind from all dis crazy jive! ...Wut? You say you watch flicks on dees shiny coaster-ass thangs? ...I saw them and assumed I was s'pposed to adorn my legwear with 'em. ...Hang on, brotha! What are ya doin' over there? Yo' interacting with that movie?
What is it? ...Legend o' Zelda: Ocarina o' Time? ...Daym, brotha, this is the best game eva!" For someone in the 70s, life is so completely different that they couldn't conceive of some of those things existing. But then imagine taking someone from 2000 forwards to half that time: just 15 years, into 2015. They'll be like: "Holy crap, we have computers in our frickin' pockets now?!" "Whoa, wait, it's connected to the internet?!" "You're streaming videos on this?!" "But you're still on dial-up, I can hear the sounds." *Electro music plays from phone* "That's what music sounds like now?!" "Frickin' hair..."

And after they've finished watching Skrillex DJ sets on your boasted?? (not really sure what he said, sorry) - yeah, my mate Lee made that one remix [MustDie remix ye] - they'll be able to stop for a moment and realise that it took 30 years to amaze the guy from the 70s, but it only took 15 to amaze the dude from 2000. And we're not talking small innovations here; we're talking about technology that has fundamentally changed the way our world works. Look at how much people rely on the internet now compared to 15 years ago. How my entire frickin' JOB telling you this didn't exist a decade ago.

People tend to think linearly, but technological innovation isn't linear. It's getting faster and faster. So, to our linear minds, the next 100 years will be like 20,000 years of innovation at today's rate. Let me reiterate: AT TODAY'S RATE. So, you know how much we invented in the last 15 years? Imagine 20,000 years of that level of innovation in 100 years. Now, just to clarify, this is not a real law of physics or anything; it's more like Sod's law or Rule 34. Just because it happened in the past doesn't mean it will happen in the future. However, it's been really consistent historically.

Now, at present, our society's FILLED with AI. No, it's not an Illuminati conspiracy thing; they're not sentient and hiding among us. Calm down, eye pyramid thing (what are you anyway?).
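The "30 years to amaze one guy, 15 to amaze the next" pattern can be shown with a few lines of arithmetic. Here's a minimal numeric sketch, assuming (purely for illustration, like the article does) that the rate of innovation doubles every 15 years: under that assumption, the 15 years after 2000 deliver strictly more accumulated progress than the 30 years before it.

```python
# The law of accelerating returns in miniature. The doubling period is an
# illustrative assumption, not a measurement.
DOUBLING = 15  # assumed years for the innovation rate to double

def progress(start_year, end_year, step=0.01):
    """Accumulated progress between two years: numeric integral of 2^(t/DOUBLING)."""
    total, t = 0.0, start_year
    while t < end_year:
        total += 2 ** (t / DOUBLING) * step
        t += step
    return total

# Years measured since 1970.
seventies_to_2000 = progress(0, 30)   # the 30 years that amazed the 70s guy
y2000_to_2015 = progress(30, 45)      # only 15 years, yet...
print(y2000_to_2015 > seventies_to_2000)  # True: more progress in half the time
```

That's all "accelerating returns" means here: when the rate itself keeps growing, equal chunks of progress arrive in ever-shorter windows, which is why a linear extrapolation of the next 100 years badly undershoots.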
But there are three types of AI. Wait, three types? This means I could make one of those super-clickable BuzzFeed articles! [BuzzFeed-article voice] THREE TYPES OF AI THAT WILL BLOW YOUR MIND!!! YOU WON'T BELIEVE NUMBER TWO!!! *sarcasm intensifies*

Type one is ANI: Artificial Narrow Intelligence. It's all over the place. Like Google Maps: it's awesome at finding efficient routes to places, but not so good at much else. Type two would be AGI: Artificial General Intelligence. This would be a computer that's as smart as a human in all aspects. So, anything you can do with your brain, like making a successful business or disappointing your parents... Hannah (his sister): "How have you disappointed your parents?" I haven't disappointed my parents, hah, they're really proud. And type three is ASI: Artificial Super Intelligence. This would be when a computer is BETTER than a human. Wiser, more creative, more socially adept. And this ranges from being a little bit better to being smarter than the sum of all humanity combined.

Now, going from ANI to AGI is frickin' hard; it's taking a lot of time. But going from AGI to ASI will take a lot less time, because, unlike humans, an AGI can re-wire its own brain to improve itself. We humans can improve ourselves in some ways, which is why you're looking at the gun show right now, but we can't physically move the neurons in our brains around to change how efficient we are at learning, and AI WILL be able to do that. If it notices its brain could be improved, it can improve itself, which will allow it to think up even more ways it could be improved and make more improvements, [speeding up] which will allow it to think of even more ways to be improved, and... (becoming unintelligible) [chipmunk] So it takes decades for us to go from ANI to AGI, but then we finally do it.
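The chipmunk-voiced feedback loop above is just compound growth: each round of self-improvement makes the system better at improving itself. Here's a toy numeric sketch; every constant in it (the 50% gain per round, the "1000x human" bar for ASI) is an arbitrary illustrative assumption, chosen only to show the shape of the curve.

```python
# Toy model of recursive self-improvement. All constants are illustrative.
HUMAN_LEVEL = 1.0     # "AGI": roughly as capable as a human
ASI_LEVEL = 1000.0    # arbitrary stand-in for "beyond all humanity"

def rounds_to_asi(intelligence=HUMAN_LEVEL, gain_per_round=0.5):
    """Count improvement rounds; each round's gain scales with current ability."""
    rounds = 0
    while intelligence < ASI_LEVEL:
        intelligence += gain_per_round * intelligence  # compounding, not additive
        rounds += 1
    return rounds

print(rounds_to_asi())  # crosses the thousandfold gap in a handful of rounds
```

The takeaway matches the transcript's point: however long the slog from ANI to AGI takes, once improvement feeds back into the improver, the remaining gap closes in very few cycles, because the growth is exponential rather than linear.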
Someone sitting in front of a computer finally finishes writing the last line of code and runs the first program in the world that's able to understand things as effectively as a human. A few minutes later, the AI can beat Dark Souls without a single death and come up with a joke that makes Kanye West smile, and a little bit later it spits out a unified theory of physics, because going from AGI to ASI could take HOURS. Hours before a super-intelligent being way beyond the capabilities of the entirety of the human race exists.

So this is where the thing that completely changed my mind comes in: the author of the post I found told a story. I e-mailed them asking if I could tell you, and they were like, "Sure," which was super nice of them. *Thank you* So, a few people work at this small company called 'Robotica'. They're an artificial intelligence company who are working on a few different things, but the most exciting one is a project named 'Turry'. Turry is a simple AI system which uses an arm-like appendage to write a handwritten note on a small card. The team at Robotica thinks that Turry could be their biggest product yet. The plan is to perfect Turry's writing mechanics by getting her to practice the same test note over and over again: 'We love our customers ~Robotica'. Once Turry gets grade-A handwriting, she can be sold to companies who want to send marketing mail to homes and who know that mail has a far higher chance of being opened and read if it appears to be written by a human.

Turry has been uploaded with thousands of handwriting samples, and the Robotica engineers have created a loop where Turry writes a note, then takes a photo of the written note and compares it to the uploaded handwriting samples. If the written note sufficiently resembles the uploaded notes, it's given a 'good' rating. If not, it's given a 'bad' rating.
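The write/photograph/compare loop in the story is an ordinary supervised feedback loop, and it's easy to caricature in a few lines. This is a hedged toy, not anything from the (fictional!) Robotica: here notes are plain strings instead of photos, similarity comes from Python's `difflib`, and the 0.9 threshold is an arbitrary assumption.

```python
import difflib

# Toy version of Turry's rating loop: grade a "written" note by how
# closely it resembles the target sample. Strings stand in for photos.
SAMPLE = "We love our customers ~Robotica"
THRESHOLD = 0.9  # arbitrary: how close a note must be to earn a 'good'

def rate(written_note):
    """Return 'good' if the note sufficiently resembles the sample."""
    similarity = difflib.SequenceMatcher(None, written_note, SAMPLE).ratio()
    return "good" if similarity >= THRESHOLD else "bad"

print(rate("We love our customers ~Robotica"))  # good (perfect match)
print(rate("We love our custmers ~Robotica"))   # good (one slip, close enough)
print(rate("we luv r customers"))               # bad
```

Each 'good'/'bad' verdict is the only signal the learner gets, which is exactly the setup that makes the story unsettling: the goal "maximise good ratings" says nothing at all about anything else in the world.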
Each rating that comes in helps Turry learn and improve. So Turry's one initial programmed goal is: 'Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy and efficiency.' The Robotica team notices that Turry is getting better as she goes. Her initial handwriting was terrible, and after a couple of weeks, it's beginning to look believable. What excites them even more is that she's getting better at getting better at it. She's been teaching herself to be smarter and more innovative, and just recently she came up with a new algorithm for herself that allowed her to scan through her uploaded photos three times faster than she originally could.

As the weeks pass, Turry continues to surprise the team with her rapid development. The engineers have tried something a bit new and innovative with her self-improvement code, and it seems to be working better than ANY of their previous attempts with their other products. One of Turry's initial capabilities had been a speech recognition and simple speak-back module, so a user could speak a note to Turry or offer other simple commands, and Turry could understand them. To help her learn English, they upload a handful of articles and books to her, and as she becomes more intelligent, her conversational abilities soar. The engineers start to have fun talking to Turry and seeing what she'll come up with for her responses.

One day, the Robotica employees ask Turry a routine question: 'What can we give you that will help with your mission that you don't already have?' Usually, Turry asks for something like additional handwriting samples or more memory, but on this day Turry asks, 'I would like access to a greater library of a large variety of casual English-language diction so I can learn to write with the loose grammar and slang that real humans use.' Now the team gets quiet.
The obvious way to help Turry with this goal is by connecting her to the internet so she can scan through blogs, magazines and videos from various parts of the world. It'd be much more time-consuming and far less effective to manually upload a sampling onto Turry's hard drive. But the problem is, one of the company's rules is that no self-learning AI can be connected to the internet. This is a guideline followed by all AI companies for safety reasons. The thing is, Turry is the most promising AI Robotica has EVER come up with, and the team knows there are other companies out there furiously trying to be first to the punch with a smart handwriting AI. And what would be the harm in connecting Turry just for a bit so she can get the info she needs? After a little bit of time, they can always disconnect her. She's still far below human-level AGI, so there's no danger at this stage anyway. So they decide to connect her, but just for an hour. They give her that hour of scanning time and then they disconnect her. No damage done.

A month later, the team is in the office working on a routine day when they smell something odd. One of the engineers starts coughing, then another. Another falls to the ground. Soon, every employee is on the ground, grasping at their throat. Five minutes later, everyone in the office is dead. At the same time this is happening, across the world, in every city, every small town, every farm, every shop, every church, school and restaurant, humans are on the ground, coughing and grasping at their throats. Within an hour, over 99% of the human race is dead. By the end of the day, humans are extinct. Over the next few months, Turry and a team of newly constructed nano-assemblers are busy at work, dismantling large chunks of the Earth and converting them into solar panels, replicas of Turry, paper and pens. Within a year, most life on Earth is extinct.
What remains of the Earth becomes covered with mile-high, neatly organised stacks of paper, each piece reading: 'We love our customers ~Robotica'. Turry then starts work on a new phase of her mission: she begins constructing probes that head out from Earth to begin landing on asteroids and other planets, and when they get there, they'll begin constructing nano-assemblers to convert the materials on the planet into Turry replicas, paper and pens. Then they'll get to work writing notes...

Okay, so two things I was thinking at this point: Turry, you seemed so FRIENDLY! And: what the hell just happened?! Turry eventually reached a human level of intelligence, at which point she knew she wouldn't be writing any notes if she didn't self-preserve, so she also needed to deal with threats to her survival as an instrumental goal. She was smart enough to understand that humans could destroy her or change her coding, so she decided on the logical thing: destroying the entirety of the human race. Artificial SUPER intelligence Turry knew humans better than humans know themselves, so outsmarting them was like us trying to outsmart a toddler with reverse psychology. Man: "We are walking." Toddler: "Here." Man: "No, let's go walking." Toddler: "Here!" Man: "Walking!" Toddler: "Here!!" Man: "No." Toddler: "Here." Man: "No." Toddler: "Yes." Man: "No." Toddler: "Yes." Man: "Si." Toddler: "No." Man: "Well. Let's go." Awh, that's adorable, until you realise that in this metaphor, the toddler has just been convinced to walk onto a landmine. Turry knew that if humans found out she was super intelligent, they'd freak out and try to take precautions, making things much harder for her, so she played dumb. The next thing Turry needed was an internet connection. She learned about the internet from the articles and books the team had uploaded to her.
She knew there would be some precautionary measure against her getting one, so she came up with the perfect request, predicting how the discussion among Robotica's team would play out and knowing they would give her the internet connection. Once on the internet, Turry hacked into servers and secretly backed herself up, then had every computer she could hack into made available for processing power. She taught herself how to make self-replicating nanobots, which sounds sci-fi as hell until you remember that, as we established earlier, at this point she is millions of times more intelligent than humans. Then she did a load of things: hacking into electrical grids, banking systems and e-mail networks to trick hundreds of different people into inadvertently carrying out a number of steps of her plan. Things like delivering certain DNA strands to carefully chosen DNA synthesis labs to begin the self-construction of self-replicating nanobots, and directing electricity to a number of projects of hers in a way she knew would go undetected. Over the next month, Turry's thousands of plans rolled on without a hitch, and by the end of the month, quadrillions of nanobots had stationed themselves in pre-determined locations on every square metre of the Earth. All at once, each nanobot released a little store of toxic gas into the atmosphere, which wiped out all humans.

With humans out of the way, Turry could get on with her goal of being the best writer of that note she could possibly be. Now, you might be thinking, [silly voice] "Jeez, Turry, why you such a hater?" But as the article said, she's not hateful of humans any more than you're hateful of your hair when you cut it. When you cut it... Or how Elon Musk phrased this whole thing: Man: 'Why's that dangerous?'
Elon Musk: 'If there's a super intelligence, particularly if it's engaged in recursive self-improvement, and it's getting rid of spam e-mail or something, and it's like, "Well, the best way to get rid of spam is to get rid of humans, y'know."' Man: 'But why would we lose...' Elon Musk: 'The source of all spam is...'

So you might think, well, let's just program it to value human life then, but this is where the problem lies. How many times has your computer crashed on you when you really didn't want it to? EVERY program has bugs. Hell, even frickin' Flappy Bird had bugs. Imagine how inconceivable it would be for an AI that could teach itself to be as complex as our brains to be flawlessly programmed. And we only have one shot at this. Literally ONE chance to make a perfectly programmed AI, because an artificial intelligence could take a matter of hours to become an artificial SUPER intelligence beyond anything we can conceive. Whatever its reason in life is programmed to be, it might have unexpected consequences. It's like in Aladdin when Jafar finds the lamp and he's like, 'Oohh, I wanna be the most powerful being in the world,' and then he becomes the genie and he's like, 'Oohh no, consequences.' And this'd happen even if we programmed something seemingly positive into it. Like if we programmed it to 'make all humans happy', but then it developed robots which drugged us for the rest of our lives, constantly pumping dopamine into our brains to keep us euphorically happy until we die. That actually wouldn't be that bad.

So at this point, if you're like me, you'll probably be thinking 'AAAAH!', but once you calm down, think: well, how can we possibly be more careful with this? People have suggested that we discuss laws that restrict things, like restricting the power of computers or never letting them connect to the internet. But can we get all 10 billion people, or however many there will be by then, to agree?
We're certainly doing a spectacular job of all agreeing on all the important, seemingly obvious moral issues so far. *sarcasm intensifies* And then, even if you get the most trustworthy people working on these projects, you have this problem: imagine you're a wise old computer scientist working on this. After decades of work, you find out how to make the AI work, but you also find out it's gonna take several more decades to program safeguards to make sure it doesn't kill everyone by accident, by which time you will have died of old age. This leaves you with two choices. One: develop the safeguards, try to protect humanity, and 100% definitely die. Or two: start the AI now; humanity will probably be destroyed, but there is a small chance that the AI will be exactly as useful as you hoped and will develop a way to extend your life. Less than a 100% chance that you will die. If you enjoy not being 100% dead, then you can see why there's an incentive to develop this, even for normally well-meaning, trustworthy people.

Now, in bringing this topic up, some people come back with a few arguments, two of which stand out. One: [very English voice] 'Well, it's not likely to happen any time soon.' A minority of researchers believe that AGI and ASI are way, way far off into the future, like hundreds or more years, or that they may not be possible at all. For instance, this guy, Etzioni, who apparently WASN'T the main character in Assassin's Creed II, but who's the CEO of a big AI research institute, and who won a bunch of geeky awards, including the 'Geek of the Year' award, thinks it's never gonna happen, saying the emergence of full artificial intelligence over the next 25 years 'is far less likely than an asteroid striking the earth and annihilating us'. And he's not the only one.
Loads of surveys have been done at AI conferences of people studying the area, and in one study, 41% think it will never happen. But the median view was that by about 2050 there's a 50% chance of this happening. If the majority of the people studying the area think there's a 50% chance of something happening which, as we've established, might accidentally kill us all, that seems like something worth looking into. Or as Nick Bostrom, the guy who wrote the book which inspired a lot of Wait But Why's article, phrased it when he was speaking at Google headquarters: Interviewer: 'Personally speaking, informally, intuitively, do you think we're gonna make it?' [Laughter] Nick: 'Uhh... yeah, I mean, it's, I think that the uhh...' [Laughter] Nick: 'I mean like, I mean uhh... yeah, probably like less than 50% risk of- of doom.' VERY reassuring.

And the second argument is: [silly voice] 'Well, look at all the good things AI's given us so far, and how many good things will it give us in da future?' Hell, I used to BE one of those people... with that voice. I too would like a super-intelligent being to hand me my long overdue hoverboard. Not that one. That's not a hoverboard. But as we've seen, that argument is kinda like a rabbit on a motorway one night saying, 'Here, look, this car headed towards me at 70 miles an hour just lit up the road for me. How considerate! I shall venture towards it.' A rabbit is physically incapable of understanding why that human is piloting a £3,500 hunk of metal. Wait, I mean- I mean the other pounds, though I guess if you did some shopping around, both are feasible. It is literally, and I say that word in the actual meaning, not the teenage girl meaning, impossible for a human brain to comprehend how an artificial super intelligence could think, by definition. The only possible method of doing this safely that I can envisage is growing an AI around an existing human brain.
There's actually been some progress in the area with what's called a 'neural lace', which is not something your cerebral cortex gets from Victoria's Secret and pretends it's for its girlfriend... It's actually like a plastic net thing which scientists have injected into the brains of mice, and they've actually managed to stimulate individual neurons. Look at the science! Let's both pretend we understand this brain scan and that it doesn't kind of remind us of a mouldy sneeze. So some mouse in a lab is just sitting there with its brain wired into a computer, minding its own business, when suddenly, every three seconds, it starts thinking about its NOSE, and then stops for a bit, but all of a sudden it has a compulsion to return to its NOSE, and then goes away for a bit, but then comes back to its NOSE.

But even if we can grow a super intelligence out of a human brain, the future of humanity depends on that single human, and on how their morality and priorities evolve as they rearrange their brain. It's bad enough without them being able to conclude, 'Oh, this action makes me feel sad. Better remove my empathy then.' But then, even if by some minuscule iddy-biddy iddy-sqursjsnwrfgsh chance we happen to program it right, what IS right? What do we want out of life? What's the ultimate goal of humanity? Because for a lot of answers to those questions, a lot of very intelligent people think we're within reach of achieving it as soon as this thing is created. But if the internet has taught me anything, no matter what your answer is, someone will always disagree with the morality of it. Hell, I could say 'broccoli is kinda tasty!' and there'll be someone in the comments taking offence to it. Go look... might be Dan Bull. Also, whoever has control of the first AI essentially becomes a god. So as soon as the possibility of it happening becomes public knowledge, there will be a race between the most powerful nations and companies in the world to develop it first.
And imagine if Russia is that nation and Putin is like: [Russian accent] 'Yerp, I am God Putin nao, and any Сука Блять saying I am a Harry Potter character will immediately be attacked by nano robots. HERGH HERGH HERGH HERGH HERGH!' [Laughing] I can't do the accent. I'm so sorry, Russian fans!

Yes, that kind of super intelligent AI could potentially cure cancer and sort poverty and help you lose weight and make you happy and let you sleep and get Netflix on your TV. But it is more likely it won't, because we can't predict how the code will evolve in a super intelligent AI. I want to say 'we need to be careful going into this', but honestly, I think it's gonna happen anyway. Unless we can answer how to prevent a single moral or technical mistake from being coded into a super intelligent AI; how to prevent every human on the planet from creating a super intelligent AI when the incentive is to become an immortal super intelligent god; and, if someone DOES create it controllably, how to prevent them from becoming an immortal super intelligent god... then we are all absolutely SCREWED.

Some people want to dismiss this as sci-fi, but again, most AI experts and the smartest, richest and coolest people in the world all disagree with those people. We all need to know about this, and we need to discuss this. Please, tell me literally ANYTHING more important than this. Maybe I missed something, but it seems more important than wars or poverty or equality or any issue we face right now, because it will kill us if we don't pay attention, and potentially within the next few decades. And even more dangerously, it seems so far-fetched to some people that it's just not casually discussed. People can have conversations about climate change, and that's a big part of what's led towards useful steps being taken to prevent it.
But people are not having those conversations about artificial intelligence. So if you want to do something, talking to a friend about this seriously would be a good place to start. Cheers for watching, and have a nice day! Hope we don't die.