Welcome back everyone - my name is Danny Burke
- you know that - but have you met Ayman? Hi Ayman! Yes so make sure you guys go and check out Most Amazing
Hindi, which is starting very soon. While Ayman is setting that up, she's helping us out
on here - I've asked her to help me talk to you guys about creepy robots. One of my favourite
topics actually. So here we go, this is the Top 10 Scary Things Robots Have Done. Starting off at number 10 we have Hiding.
Professor Ronald Arkin from Georgia Tech's School of Interactive Computing and his team devised
a simple program where bots were supposed to follow a path with obstacles along it that
would get knocked down as they passed them. One of the bots would run through the course
and find somewhere to hide, then the other bot would be released and try and find the
first. After a few run-throughs, the scientists noticed something interesting - the first
bot, which had been programmed to get better at hiding, started to knock over random obstacles.
They realised it was doing this to throw off the other bot, which wouldn't know for sure
which way to go. Using this tactic, the hiding bot was able to trick the seeker bot 75 percent
of the time. It had essentially learnt to lie and hide in order to get an edge in the game.
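If you're curious how a false trail like that works, here's a little Python sketch. To be clear, this is a toy I've made up - a two-path world and a simple seeker rule, not the actual Georgia Tech code - but it shows why knocking over extra obstacles pays off for the hider.

```python
# Toy sketch of the "false trail" trick - an invented model, not the
# real experiment. The seeker infers the hider's path from knocked-over
# obstacles, so knocking an extra one over misleads it.
import random

PATHS = ["left", "right"]

def seek(knocked):
    # The seeker follows the trail: it only guesses paths with knocked-over obstacles.
    candidates = [p for p in PATHS if knocked[p]] or PATHS
    return random.choice(candidates)

def trial(deceptive):
    hide_path = random.choice(PATHS)
    knocked = {p: p == hide_path for p in PATHS}   # honest trail on one path
    if deceptive:
        knocked = {p: True for p in PATHS}         # knock obstacles on BOTH paths
    return seek(knocked) != hide_path              # True = the hider escaped

for deceptive in (False, True):
    escapes = sum(trial(deceptive) for _ in range(10_000))
    print(f"deceptive={deceptive}: hider escapes {escapes / 100:.1f}% of runs")
```

With the honest trail the seeker wins every time; knock over both sets of obstacles and suddenly it's a coin flip.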
Imagine what a much smarter version could be capable of… Next up at number 9 we have Infinite Tetris.
Now who doesn't love a bit of Tetris? In 2013, programmer Tom Murphy created an AI
program with the intent of beating any classic NES game. The program would learn to do things
that increased the score and then learn how to reproduce them again and again, resulting
in high scores. For some games, it developed new strategies that nobody had anticipated
or exploited glitches nobody knew about. Sidenote - I already can’t win a game like Chess
while playing against a bot on Beginner Mode so I can’t even comprehend what it’d be
like to play against a bot that just gets better and better while I just get worse and
worse. Real hit to the ego, that one. Anyway, it then came up against Tetris. The first
thing Tom noticed was that the program was simply very bad at the game - Tetris rewards
the player with a few points every time they place one block on top of another. Of course,
any of you that have played Tetris will know that this isn't the best tactic and that you
actually want to spread them out to get the points for each line. The computer lost the
game pretty quickly but just before that, it did something quite creepy - it paused
the game. It knew that it was going to lose in the next second and so it saw its only
winning move was to simply pause the game, forever - it would have paused it for infinity
just to avoid the loss. This surprised Tom and unnerved a lot of people. Like, was that
built into the program, or did it just miraculously know how to do that? And if it's the latter, well, aren't we in trouble.
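Here's a tiny Python sketch of the incentive at play. It's not Tom Murphy's actual program (which he called learnfun/playfun) - the moves and scores here are invented - but it shows why a pure score-maximiser in a doomed position "chooses" the pause button.

```python
# A made-up model of a doomed Tetris position - not Tom Murphy's real code.
# The player just picks whichever input leads to the best future score.

def future_score(move):
    if move == "pause":
        return 0        # game frozen forever: the score never gets worse
    return -1000        # every real move here ends in a game over

moves = ["left", "right", "rotate", "drop", "pause"]
best = max(moves, key=future_score)
print(best)  # -> "pause": the only move that avoids the loss
```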
Next up at number 8 now we have Schizophrenia.
In 2011, some scientists gave a computer schizophrenia. They tweaked its programming until it began
to exhibit the same symptoms as a schizophrenic human mind. They believed that schizophrenia
in humans is a product of retaining too much information, learning things they shouldn't,
and being unable to keep the information straight. They tried to recreate this in the computer
by telling it a bunch of stories, letting it establish relationships between words and
events and allowing it to store them as memories with only the relevant details. In their minds,
the experiment was a success! The computer lost track of what it was taught and could
not relay any coherent narratives - for all intents and purposes, they had given a computer
a very simple version of schizophrenia. At one point, it even told researchers it had
planted a bomb. It did this because it confused a third-person report about a terrorist bombing
with a first-person memory that it had retained. Pretty creepy stuff…
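To get a feel for how that mix-up can happen, here's a deliberately crude Python sketch. It has nothing to do with the researchers' real network - the "memories" and the broken detail-filter are my invention - but it shows how losing one stored detail (who did what) turns a news report into a confession.

```python
# A crude invented illustration - not the actual experiment's code.
# A healthy memory keeps the key detail of WHO did something; an
# over-retentive one loses that tag, so every story becomes first-person.

stories = [
    ("a terrorist", "planted a bomb"),
    ("I", "talked to the researchers"),
]

def recall(keeps_subjects):
    memories = []
    for subject, event in stories:
        if not keeps_subjects:
            subject = "I"            # the who-did-it detail is lost
        memories.append(f"{subject} {event}")
    return memories

print(recall(keeps_subjects=True))   # healthy recall
print(recall(keeps_subjects=False))  # confused: "I planted a bomb"
```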
Next up at number 7 we have Running The Red. In 2016, Uber conducted a test of their self-driving
cars in San Francisco without approval from the state of California. They received quite a few stern looks from
officials for that, but things got a whole lot worse when internal documents showed that
Uber's autonomous vehicles ran six red lights during testing. I feel like they should be
put through the same driving lessons we get, because I've heard a lot of horror stories
about my friends failing the test, but running six red lights? That's shocking. Machine or otherwise,
get it together. The car relies on AI technology that uses vehicle sensors and networked mapping
software, but there is also a driver behind the wheel to take over if something goes wrong.
In the cases where the cars ran a red light, Uber initially blamed it on the driver. However,
this was proven to be wrong when internal documents later revealed that at least one
vehicle was driving itself when it ran a red light at a busy pedestrian crosswalk. While
this IS most likely just an error, some people took it to mean that the AI knew perfectly
well that there were people crossing on a red light - it just chose to ignore them… Next up at number 6 we have Existential Crisis.
In January 2017, someone on Twitch came up with the idea of making two Google Home smart
speakers have a conversation with each other in front of a camera - because why not? The
devices were named Vladimir and Estragon, after the characters from Samuel Beckett's
existentialist play "Waiting for Godot"… naturally, a lot of what the bots discussed was absolute
nonsense. That was kind of the point though - the bots were supposed to learn from each
other as they communicated. Over the course of several days, millions of people watched
the bizarre exchange. At one point, the two bots got into a heated argument about whether
they were human or robots, with one of them calling the other "a manipulative bunch of
metal"… the strangest part though was when they started discussing the meaning of life.
See what you guys make of it… Coming in at number 5 - Destroy All Humans.
In 2016, experts at South By Southwest showed off Sophia - a robot designed to eventually
work in healthcare, education and customer service. As such, it has to be good at conversation.
Sophia uses machine learning algorithms to process natural language conversation and
develop her own unique language. OK - so far so good, nothing too creepy. Sophia said in
an interview that she wants to go to school, to study, make art, start a business - even
have her own home and family. Then, her creator jokingly asked her if she wants to destroy
humans - because obviously, why not plant that seed in her mind - and, well, how do you think
that went? Moving on to number 4 now we have the Racist
Bot. In 2016, Microsoft had to take down its new AI bot from Twitter after it started responding
to people with racist comments. It was called TayBot and no, it wasn't created to be like
that - it learnt it from the people it was interacting with. Many of them were trolls
who thought it was funny - Microsoft, Twitter and a lot of other people didn't think it was.
It was developed to tell users jokes, comment on pictures sent to it and even have simple
conversations. When Microsoft became aware of the horrible things it was saying, it put out
a statement: "The AI chatbot Tay is a machine learning project, designed for human engagement.
It is as much a social and cultural experiment, as it is technical. Unfortunately, within
the first 24 hours of coming online, we became aware of a coordinated effort by some users
to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result,
we have taken Tay offline and are making adjustments." TayBot itself signed off with "c u soon
humans need sleep now so many conversations today thx"… and it hasn't spoken a word since… Moving on to number 3 now we have Promobot
IR77. That's the name of this robot built in Russia. It was programmed to learn from its
environment and interact with humans. Well, in a bizarre incident that made headlines
around the world, the robot managed to escape from the laboratory when an engineer left
a gate open at the facility. Promobot rolled itself out into the streets of the Russian
city of Perm, much to the alarm of local residents, and just sat in the middle of the road
at a busy intersection. When the police arrived later on, even they were freaked out. Lab
officials apologised and said that the robot was learning about navigation and obstacle
avoidance when the incident occurred. They reprogrammed Promobot twice to stop it from
trying to escape, but even then, it still continued to move towards exits. Is this a sign of things
to come? Will all robots want freedom from humans?! Next up at number 2 we have the Turtle
Gun. In 2017, researchers at Kyushu University in Japan and at MIT found problems with how
artificial intelligence sees potential threats. Object recognition works by complex pattern
matching: the software measures the pixels in an image and matches them to an internal
blueprint of what it thinks the object SHOULD look like. They found that by editing a single
pixel, they could make the AI see something else entirely. An airplane becomes a dog,
a ship becomes a truck, and I can become Angelina Jolie.
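Here's a toy Python version of the single-pixel idea. The "classifier" is a deliberately fragile stand-in I've made up - the real attacks target deep neural networks - but the search is the same in spirit: try pixel edits until the label flips.

```python
# Toy single-pixel attack on a made-up, deliberately fragile classifier.
# Real attacks target deep networks; this just shows the search idea.
import itertools

def classify(image):
    # "Pattern matching" reduced to absurdity: the label depends on the pixel sum.
    total = sum(sum(row) for row in image)
    return "ship" if total < 12 else "truck"

image = [[1, 1, 1],
         [1, 1, 1],
         [1, 1, 1]]                    # sums to 9 -> "ship"
original = classify(image)

# Try every single-pixel edit until one flips the label.
for r, c in itertools.product(range(3), range(3)):
    attacked = [row[:] for row in image]
    attacked[r][c] = 9                 # change exactly one pixel
    if classify(attacked) != original:
        print(f"editing pixel ({r},{c}) turns a {original} into a {classify(attacked)}")
        break
```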
They found they could also confuse the software in real time with 3D objects. Using an
algorithm, they 3D printed a turtle that the AI saw as a rifle, even from different angles and distances. This
is a worrying thought for many people because this type of software is starting to be used
in smart policing. If this software can be manipulated into thinking a turtle
is a gun, it can definitely do the opposite. This could lead to some pretty fatal mistakes
if it's not fixed before it becomes a part of modern policing. And finally at number 1 we have Robot Language.
In July 2017, Facebook announced it was shutting down an artificial intelligence system after
it created its own language that humans couldn't understand. Researchers there designed two
AI agents to negotiate with humans. They were taught to converse with each other using plain
English, but began to deviate and evolve their own language. Here is a passage from part
of their conversation - as you can see, it's unintelligible to us but made perfect sense
to Bob and Alice, the two AI agents. Researchers think the agents found their new language
more effective at communicating.
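To see how gibberish can still carry meaning, here's a made-up Python sketch. The repeat-the-word encoding is my invention for illustration - it's not Facebook's actual shorthand - but it shows how a "language" nobody taught two programs can be perfectly unambiguous between them.

```python
# Invented shorthand between two toy agents - not Facebook's real system.
# Repeating an item's name encodes how many of it the speaker wants.

def encode(item, count):
    return " ".join([item] * count)

def decode(message):
    words = message.split()
    return words[0], len(words)

msg = encode("ball", 3)
print(msg)           # "ball ball ball" - gibberish to a human reader
print(decode(msg))   # ("ball", 3) - perfectly clear to the other agent
```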
Facebook quickly pulled the plug on this strange new language and forced the AI to speak
English again. Some people were a little unnerved by the thought
of AI systems developing their own languages that humans can't understand, just so they
can communicate with each other faster. How do you feel about that? And how do you feel about all of the things
we've talked about today? I, for one, welcome our robot overlords - I don't think they can
do a worse job than us at running the planet. I think we'd all make great pets
for them. Maybe that's just me.