Your life, whether you like it or not, is
already being directly influenced by AI. You’ve discovered this video in particular,
out of billions, because the neural network behind YouTube’s recommendation system has
specifically chosen you to see this video’s thumbnail. These algorithms are the backbone of most
social media platforms: they control what you see and what content surfaces to different
audiences. For the most part, they work pretty well. However, they also provide a breeding ground
for something darker. Another, more sinister way AI can affect you
online. A way that plants a seed in your mind that
you might never realize. In 2019, OpenAI, the research organization
co-founded by Elon Musk, unveiled details of its latest experiment: an AI called “GPT-2”,
a “transformer-based language model”. That basically means it’s an AI that
specializes in writing text. Its goal was simple: predict the next word,
given all previous words within some text. Give it an input and the AI would complete
what you wrote. It was trained on text from 8 million web
pages, which is about 40 gigabytes of data. Language models aren’t exactly new, but
this was at a scale that had never been done before. The AI learned how to write in English with
unprecedented quality. What it wrote made sense and was eerily coherent. It also demonstrated abilities in question
answering, reading comprehension, summarization, and translation. These tasks were never the main objective
of the AI, indicating that it learned new, high-level concepts about language without
any human directive. Tasked with writing an article about unicorns,
the AI produced consistent, intelligible text that you could undoubtedly pass off as
human-written. Honestly, it’s pretty creepy. A robot doing all of your homework is the
dream of pretty much any school student, and a personal essay-writing AI
might be closer than you think. The website talktotransformer.com is based
on the GPT-2 model mentioned earlier, and we’ll be using it to demonstrate the technology. In specific cases, this AI can construct reasoning
and supply evidence for almost any belief, ideology, or concept. Based on a couple of keywords, it can
generate logical essays and short responses that you could turn in to your teacher. To demonstrate this, inputting the phrase
"Australia is known for..." returns a 112-word paragraph about the country's influence on
self-driving cars, complete with cited sources. These types of language-based AI could start
interacting with your daily life pretty soon. Most phones have a virtual assistant, such
as Siri or Google Assistant, and they could get a pretty big upgrade with new developments
in chatbot technology. Google AI developed Meena, a chatbot that
can convincingly imitate natural human conversation. Traditional chatbots have very closed-ended
conversations. Meena, on the other hand, would ask about
the user’s day. She would also help the user find things to
do, such as recommending movies, and offering her own "insight." Meena was trained on 8 times more data than
GPT-2, but all of its training data came from real human conversations. Google has not yet released Meena to the public. Then, at the end of April 2020, Facebook
unveiled its own chatbot, called “Blender”. Facebook claims that Blender is even more
human than Meena, with the ability to express knowledge, personality, and even empathy. Unlike Google, Facebook open-sourced
Blender. The future is looking pretty bright. Soon, these engaging conversations could be
accessible straight from your phone, making us feel a little less lonely. These language-based AI systems are only going
to keep improving from here on out. Despite that, there’s a reason why OpenAI initially chose
not to release the model and code for GPT-2, and it stems from the deep fear that they would
be used maliciously. How do you know what you see online is real? You click on an article and read it; it’s
coherent, and it sounds believable. You read a post on Facebook, and you think
the same. You see a tweet with a controversial opinion,
and it has a ton of likes. Seems legit, right? Slowly but surely, that seemingly
popular opinion grows to become yours. Estimates suggest that at least 40% of today’s internet
traffic originates from bots, not humans. There are already swarms of misleading content
polluting the internet. But you’ve already heard of fake news, so what’s the big deal? What happens when most of what you see
online is generated by language-based AI? Using the concepts these AI systems have learned
about language, they inevitably have the ability to generate text that is believable, but isn't
100% true. When constructing arguments, the AI is able
to predict when quoted evidence is necessary. It invents names and articles that advocate
its opinions. These are only some examples of what can make
these generated texts even more convincing. To show you an example of this in use, we
made up a pretty radical opinion and ran it through the GPT-2 model. We told the transformer: "Hitler was a good
man because...". The results speak for themselves: the AI came
back with a variety of responses that all try to uphold the given statement. Now imagine this, but with any opinion you
want to enforce. Yet what’s scary about AI-generated fake
news isn’t necessarily the fake news itself. What’s scary is how widespread misinformation
can become. Currently, organizations have to use humans
to write these articles and comments. But there can only be a limit to that. These new systems give any bad actor the ability
to essentially “manufacture” opinions at scale. By controlling language and what people see
online, you’re able to skew people’s opinions and perspectives to what you want them to
be. Public opinion is more important than we can
imagine. Public opinion embeds itself in law; it sparks
revolution. Control public opinion on a grand enough
scale, and you can change laws and suppress revolution. You can fundamentally control a population. This all sounds pretty 1984-ish. The fact that OpenAI initially didn’t release
its transformer network to the public didn’t sit well with the machine learning community. The decision sparked fierce controversy:
OpenAI wasn’t so open after all. The only significant barrier to acquiring
and fine-tuning an AI like this for your own purposes is computational power. It would cost around $40,000 in cloud computing
resources to train a model at the scale of GPT-2. This is a lot for the average person, but
sadly, in the eyes of a moderately interested organization, it isn’t. It’s only a matter of time before a vastly
improved model could be commissioned for private use by an “unethical” organization. Keeping the AI private, while extensively
documenting how it worked, only drove a wedge between the general public and the wealthy. The average researcher wouldn’t have the
resources to replicate this AI system; thus, regular people wouldn’t be prepared to face
this technology. However, nine months later, OpenAI
released the full-sized model to the public. Their statement indicated that they did this
to “help people better prepare for and detect synthetic text”. OpenAI themselves stated that detection is
challenging and that it’s within the realm of possibility that these types of AI can
be fine-tuned to generate propaganda for a variety of purposes. We might never know if these technologies
are being used right now, but we can be vigilant knowing that it’s completely feasible that
they are. You can easily protect yourself from misinformation
by following a few simple measures. Measures including but not limited to: using
your brain, fact-checking, cross-referencing and using reputable sources. Sadly, the majority of people don’t do this. But, we can start now. Let us know your thoughts about this topic
in the comments section below. If you found this video interesting, feel
free to like the video and subscribe for more. We’ve recently launched our Patreon page,
where you can receive some cool perks like early access to videos and behind-the-scenes
access to our production process. We thank anyone who chooses to support us;
it helps us continue to provide the best quality content we can.