Hello and welcome to Crash Course: Navigating
Digital Information. My name is John Green, and you may know me
from my various channels on YouTube, all-caps tweets about Liverpool Football Club, Q&As
about books on my website, or elsewhere on the internet. I spend a /lot/ of time online. In fact, in some ways, I live here. The average American spends 24 hours per week
online, but one in four U.S. adults say that they are online almost constantly. And I am among them. I love the Internet--it contains so much helpful
information; it connects us to each other; it allows more people to have a voice in public
conversations. But of course, the Internet is also littered
with misleading, sensationalized, and downright false information. So, OK. I only know two jokes. I’ll tell the other one at the end of the
series, but here’s the first one, which was made famous by the American writer David
Foster Wallace: Two young fish are swimming along one day
when an older fish swims past and says, “‘Morning, kids. How’s the water?’ The young fish just look at each other for
a second and then swim on for a while, and then one says to the other, ‘What the heck
is water?’” Now I am not the wise old fish of this enterprise. I am as susceptible to misleading information
as anyone. I tend to focus on information that reinforces
my pre-existing worldview, and to passively ingest all kinds of media while scrolling
and swiping endlessly through my feeds. But I also think we ought to be suspicious
of anyone who claims to be the wise old fish with some special understanding of what we’re
swimming in. Believing that you’re immune to the seductions
of false and misleading information is, if anything, a symptom of being influenced by
false and misleading information. I tell this joke for two reasons: First, because
I need you to call me out if I start acting like the wise old fish, and second, to point
out that much of what we’re swimming in is new and strange--and we’re still figuring
it out together. So, for this series, Crash Course has teamed
up with MediaWise, a project out of the Poynter Institute that was created with support from
Google. The Poynter Institute is a non-profit journalism
school. The goal of MediaWise is to teach students
how to assess the accuracy of information they encounter online. The MediaWise curriculum was developed by
the Stanford History Education Group based on civic online reasoning research that they
began in 2015. Other MediaWise project partners include the
Local Media Association and the National Association for Media Literacy Education. I’m saying all that, and I’ll say it again,
because I think it’s important to understand where this information about information came
from. Over the next ten episodes, we’re going
to dive deeply into the feed and share some tools that are proven to work when it comes
to evaluating the quality and accuracy of information. We may not figure out exactly what water is,
but we’re going to try to learn to improve our swimming. Stan, have we rolled the intro yet? We’re MULTIPLE minutes into the video. Roll the intro! INTRO
When you want to see what your friends are up to, you might head to Snapchat, WhatsApp,
Instagram, or maybe /Fin/stagram. I don’t get that joke, but young people in
the office said that it is funny. And then when you want the news, you may wait
to be startled by a push alert from a news app, or you might go to Twitter, or Snapchat,
or Reddit. And when you need to settle a feud over how
to pronounce g-i-f -- gif, or possibly gif -- you just use a search engine. These habits all feel quite natural to me,
but in fact they are part of a huge shift in how humans find, and produce, and share
information. Just a short time ago, the production of information
was controlled by a much smaller group of people. Instead of Googling movie times, you had to
buy a newspaper or call the movie theater and risk talking to an actual human being. To write a research paper, you had to hunker
down in the library, not for the outlets and the free Wi-Fi but for access to encyclopedias
and books. Now I should note that there’s a lot of
information that’s not available online but is available at your library. Libraries continue to be incredibly valuable
resources. But these days, anyone can hop online and
produce information via their personal website, social media, or YouTube channel. Well, actually, no. Access to digital devices and high-speed Internet
is still a real barrier to entry for many people, which means unequal access to information. It also means that while it can feel like
everyone is participating on Facebook or Instagram, in fact billions of people are not part of
those conversations. Still, the barrier to creating and retrieving
information is much lower than it was a generation ago. Like, when I was a kid, if you wanted to share
an opinion with the public, you wrote a letter to the newspaper and hoped they would publish
it. There was no other way for a stranger to hear
your story or your perspective. Furthermore, as you already know from the
three DMs you’ve answered since you started this video, the internet changed how we communicate. We can talk across time and space. We can connect across geographical and political
boundaries, we can create organizations and communities, find people with similar interests,
or we can lift people up when they feel alone. But, when information flows this freely, dangers
are inevitable. Misinformation -- unintentionally incorrect
information -- and disinformation -- information that’s wrong on purpose -- spread quickly
online. As do hate speech and propaganda. Plus, we can easily create online worlds where
we only see information we already agree with, or that lines up with our point of view. For instance, if I only followed people on
Twitter who were Team Blake, I would have been pretty blindsided when Garrett won The
Bachelorette. The same could be said for, say, actual elections. And because we use information for all kinds
of decisions, misinformation and disinformation are powerful. This is true for small everyday decisions--restaurant
reviews affect where we eat--and for much larger issues, like choosing a college to
attend or a place to work. The quality of our information directly shapes
the quality of our decisions. And the quality of our decisions, of course,
shapes the quality of our shared experience as humans.
So, when we talk about [air quotes] “bad” or questionable information, that includes
fake news. The kind of news reporting that is /totally/
false. Which is a huge problem, especially on social
media and during breaking news events. And it’s a problem across all political
ideologies and perspectives. But we’re not just talking about fake news. We’re also talking about information that
isn’t credible because the author of that content isn’t an authority on the topic. Take a blog of serious-sounding fitness tips
from someone who loves gym selfies but isn’t qualified to give professional health advice. We’re also talking about information that
comes from writers or organizations that have something to lose from the whole truth. Like a company that sells toasters creating
BestToasters.com to publish lists of the “best” toasters, with their brand at the top of every
list. Or friends who conveniently find videos that
supposedly [air quotes] “prove” gif is pronounced gif when you know that gif is pronounced
gif. But the thing is, quality of information lies
on a spectrum. It’s not a duality of good information and
bad information. It is our job to evaluate the information
that we receive, find out where it falls on that spectrum, and decide how to use it going
forward. But as a species, we are not particularly
good at judging the quality of information on the internet. In fact, we’ve always been bad at it. In 2002, a study with over 2,000 participants[1]
reported that a website’s /design/ was the most frequently mentioned factor in judging
a website’s credibility. When asked to choose which of two sites was
more credible, 46% of participants used the look of the website in their evaluations. Adults and young people alike still typically
evaluate information based on factors unrelated to its content: how it looks, whether they’ve
used it before or who referred them to it. In 2016, our friends at the Stanford History
Education Group released a study of over 7,000 middle school, high school, and college students. When asked to evaluate online information,
they based their evaluations on a site’s look and feel. They focused on things that a website creator
could easily change, like the URL or the About page. Spoiler alert: that technique doesn’t work
well. One of the things that participants had to
do was judge minimumwage.com, a site about -- you guessed it -- the minimum wage. It claimed to bust myths behind the minimum
wage, listing ways that raising it would hurt the economy. Many students never discovered that the site
was created by a public relations firm working for a group that wants to keep minimum wages low. The firm represents industries that stand
to benefit from paying employees less. In other words, the creator of this website
has something to lose by telling both sides of the minimum wage debate. So we can’t fully trust them to do so. Let’s go to the Thought Bubble. During the study, some students also felt that
the presence of certain types of content on a website meant that it was more reliable. Like, when students found something they thought
was evidence on a page -- a statistic or an anecdote, perhaps --
they assumed that meant the entire page was more reliable. And they often didn’t check the sources,
because, you know, it’s the Internet. People never check sources. For example, participants also looked at an
article that was actually an advertisement for Shell Oil[2]. 70% of high school students rated it as more
reliable than a traditional news story. Why? Because of this pie chart at the top. Statistics and infographics are often easy
and effective ways to communicate facts and evidence. But that doesn’t mean all charts are trustworthy. Like, here’s another chart. It says that, 96% of the time, the sky is
green. The /existence/ of this chart is no more proof
of its validity than, say, a spooky noise is proof that your house is haunted. But back to the Stanford History Education
Group study. Over 80% of middle school students didn’t
correctly identify that this was an ad, either, even though it was labeled “Sponsored Content.” Sponsored content means a company paid the
publication for a space on its site, hoping to advertise with a post that /looks/ like
a news article. And as you may know, sponsored content shapes
a lot of discourse on YouTube. And it’s effective advertising, because
many of us can’t help but believe that what looks like a news article must in fact be
one. Thanks, Thought Bubble. You might argue that the students in that
study are still learning. They’ll probably be better at it when they
get older. Well, the Stanford History Education Group
also tested historians with PhDs, first-year college students from a pretty fancy university,
and professional fact-checkers from major news organizations. Fact-checkers are the people who go through
each bit of copy in a news story to make sure that all the facts are accurate. There are far too few of them in this world. But anyway, how effectively would you guess
these three groups evaluated information quality? Although both the historians and the students
have achieved academic success and are smart, thoughtful people, they also didn’t do well
in the experiment. When evaluating online sources, they also
focused on superficial things like a site’s layout, how much content it had, and
whether it linked to other sites. They focused largely on appearance and the
/presence/ of things like evidence and links, not their content or their value. And those strategies might have worked in
the early days of the internet, but things are much more complicated now: many
misleading or false stories cite sources that either don’t say what they’re purported
to say, or are themselves also false. It’s misinformation all the way down. So, who /did/ sort out the misinformation
from the good info? The fact-checkers! I mean, that is literally their job, but
it’s nice to know they were good at it. The fact-checkers did well because they employed
a variety of carefully honed skills to decipher fact from fiction. And we are going to learn those skills together
from the fact-checkers in the next episode. Also the one after that, and the one after
that, and the one after that. We’re going to fact-checker school! In the meantime, if you’re interested in
learning more about MediaWise and fact-checking, you can visit @mediawisetips on Instagram. Thanks for swimming with me. I’ll see you next time. For this series, Crash Course has teamed up
with MediaWise, a project out of the Poynter Institute that was created with support from
Google. The Poynter Institute is a non-profit journalism
school. The goal of MediaWise is to teach students
how to assess the accuracy of information they encounter online. The MediaWise curriculum was developed by
the Stanford History Education Group based on civic online reasoning research that they
began in 2015. If you’re interested in learning more about
MediaWise and fact-checking, you can visit @mediawisetips on Instagram. ________________
[1] https://dejanseo.com.au/media/pdf/credibility-online.pdf [2] https://sheg.stanford.edu/civic-online-reasoning/comparing-articles