Hello and welcome to this evening's
meeting of the Commonwealth Club of California. I'm Eric Siegel, chair of the club's
Personal Growth Forum and your host. We invite everyone to visit us online
at commonwealthclub.org. This evening we continue our series
of talks about false narratives and their cousins,
conspiracy theories, which can damage the shared fact
base on which democracy depends. Whether through distorted context,
misleading editing, oversimplification, incorrect
extrapolation from a few examples or just outright lying,
the result is the same. There can be a loss of trust
in institutions, tribalism,
and a search for an authoritarian leader in confusing times, increased stress levels and anger in society,
and resulting legitimization of violence. It's therefore important that we look
at the causes of false narratives and some possible actions
we can take to decrease their power. Our first talk in this series
on September 1st by Joe Pierre, was a tutorial
on the psychology of false narratives and the social and technological factors
that made them so powerful today. Then on September 6th, Lee McIntyre
discussed how to talk with a friend or family member who's fallen
into the trap of a conspiracy theory. This third talk by Dr. Sam Woolley will be about actions
we can take as a society and as individuals to reduce the power
of false narratives in our world. Sam Woolley is an ethnographer,
so he takes a broad and culture-centric view of the impact of false narratives
and the motivations of the people behind them. He doesn't just look at technologies
or journalism or law or another somewhat narrower field. His book, The Reality Game: How the Next Wave of
Technology Will Break the Truth, explores the ways in which emergent technologies
are already being leveraged to manipulate public opinion. And he proposes
strategic responses to these threats. He's currently working on a book
to be titled Manufacturing Consensus, which explores the ways
in which social media and automated tools such as bots,
have become global mechanisms for creating illusions of political support
or popularity. He's also the author of numerous academic
articles and book chapters on the use of social media
for political manipulation. And he's the founding director
of the Digital Intelligence Lab at the Institute for the Future. He's currently the project director
for propaganda research at the Center for Media Engagement
at the University of Texas at Austin. And he was previously director
of research of the Computational Propaganda Project at the University of Oxford
and visiting faculty fellow at the Center for Information Technology Research
in the Interest of Society at UC Berkeley. Because he's worked with so many top
executives and government officials and was a resident fellow at the German
Marshall Fund's Digital Innovation Democracy Initiative,
he's deeply knowledgeable about U.S. and European efforts
to control disinformation. In short, he has the perfect background
for our discussions this evening, so it's now my pleasure to introduce Dr. Woolley. Thank you, Eric. And thank you to the club for having me. It's great to be here. Great to be speaking on this topic. I'm excited today
to talk about how we can reduce the power of false narratives,
what we can do to fight back, as it were,
and where the solutions lie in this space. If you're like me, for the last several
years, you've heard a lot about misinformation and disinformation. I think it's almost unavoidable. You've heard about conspiracy theories. You've heard about Russian manipulation
during elections. And so you're probably concerned
about the information ecosystem. Maybe you're concerned
because you have children. Maybe you're concerned because you worry
about the state of the climate and the state of people's health. Disinformation and manipulation of the information
ecosystem affect all of these things. And so you're right to be concerned. But today, what I'd like to do is inject
a bit of hope into this conversation. And of course, I am
going to talk about some difficult things. I'm going to go over some things
that we need to know in order to have a common conversation here,
which I think is really important. These days, many times
people are just talking past one another. And so today
we're going to set the stage a bit. But as Eric said, today's talk is on reducing
the power of false narratives. And let's get right in. The talk outline today is as follows: I'm going to tell you
a little bit of a story. As an ethnographer,
most of my work is story-based. Clifford
Geertz called it deep hanging out. And that's what I do. A lot of times
people think that I am a technologist, a computer scientist, data scientist. I'm not. I'm an ethnographer. I spend time with the people who make
and build these technologies. And most specifically,
I spend time with the propagandists who leverage these technologies
in attempts to manipulate public opinion. So I'm particularly focused myself
on the production of manipulation,
the production of propaganda. It's been a very weird ride
in the last ten years of doing this work, but I've learned a lot. I've learned a lot about the intentions
of the people who build these things. I've learned a lot about why they do what
they do, how they do what they do. And in so doing, I've learned a lot
about how to combat what they do. And so the project of my team,
of all the great researchers that I work with, really, is to provide solutions
technical, social policy, legal,
all of the above, in order to do more. After
I tell you a story, I'm going to tell you a bit briefly about my team
at the University of Texas at Austin. I'll tell you about some key ideas
and terms. We'll talk about this concept,
as a little bit of a refresher from a past talk, of the demand for deceit,
and why it exists and why it's important to understand that demand in order
to understand the solutions. Computational propaganda
has been a huge topic of my work. I'm a coauthor of an edited volume
called Computational Propaganda, in which we study this phenomenon
in multiple countries. And I will define it for you. And then we're going to talk about
free speech versus safety, because at times
we seem to be told that we can either have one or the other these days. And I don't believe that; I'm going
to push back on that idea. And then we're going to talk about
the way forward. And we're going to just, you know, spend most of our time
in that last category, talking about the solutions. So first, a story about manufacturing
consensus online. You heard Eric mention
that it's the title of my next book that's available now for preorder,
but it's actually building upon the work of Herman and Chomsky
and that idea of manufacturing consent, which actually has a much longer history
in the study of propaganda and manipulation of thought,
the spread of false narratives. In fact, Walter Lippmann, arguably one of the most famous scholars
of the 20th century, coined this phrase
the manufacture of consent, and said that it was the prerogative
of the powerful to manufacture the consent of the people
through control of our media system. Not much has changed except for the fact
that we now have social media, which are not one to many,
but many to many, and that therefore means that the scale,
the size of these campaigns and where they can spread
and how quickly has changed vastly. And it also means
that almost anyone can be a propagandist if they have a little knowledge,
a little bit of know-how. One such person is Hernan. Hernan is a self-professed digital growth hacker. He spends his days
working on new and devious ways to market to clients online,
with a focus on recruiting social media influencers
to endorse particular products. His specialty the product. Hernan most often works to promote
is politics and political belief. Specifically,
he works to create authentic looking campaigns, interactions
with candidates and causes. The key here is authentic looking. They're not actually authentic
most of the time. In truth, the support that Hernan drums drums up for his clients
is anything but authentic. He is not an activist
engaging in community organizing. He doesn't recruit actual organic,
grassroots political supporters for the work
that he does to support a common cause. Instead, he traffics in what he calls like
exchanges, getting one person
to like content for another person and so on and so forth, so that you trick
the algorithm into thinking it's popular, or reaction exchanges, commenting on particular
kinds of things in the comments section to create the illusion
of traffic and amplification. Hernan is in his early thirties,
and he spends most of his days in a small office
staring at a computer screen. And this is all taking place in Mexico
City. From his chair, he recruits people across
multiple social media sites to essentially rent out their profiles
for money. He and his colleagues then take over these
accounts of these rising influencers, what my lab team calls nano influencers
under 10,000 followers or so, using them to like specific
political content, post comments, watch video stories, and vote in online
polls. Everything Hernan does is aimed
at lending politicians and other clients the illusion of large scale online
support, amplifying their popularity and artificially boosting attacks
on the opposition. His goal is to manipulate social media algorithms to categorize particular people or topics as trending, featuring the artificially boosted content
and getting it in front of more real users and also more journalists
who then report on the story thinking that it's real
or report on the trend. Hernan works to create a bandwagon effect
for his clients, which is one of the things that you've
probably discussed in past talks, to actually recruit
more real followers and adherents, because once the bandwagon
effect happens, more people glom on. Hernan is a master of the art of what I call
manufacturing consensus. So we've moved
beyond the time of manufacturing consent, where we're simply asking people to say,
Yes, it's okay to go ahead with your governance, or the way
that you do things, or your media as it exists, and towards the manufacturing
of broad-scale agreement that everything's okay: we all agree that a particular
political candidate has viability. And suddenly that political candidate
actually has viability because of the false support
that they've had online. It's happening around the world,
not just in the United States. It's happening in particular in places
like India, Brazil, the Philippines, and throughout Europe and Africa. And so how do we study this stuff?
Well, we study it through a combination of interview, of fieldwork,
and of time spent in spaces. We're not in the business
of doing quantitative research per se, although we do do some of it
in complement to our qualitative research. We call
ourselves the Propaganda Research Lab. This is a team at the University
of Texas Center for Media Engagement, and it's a diverse team of people
working to do a lot of different things in support of building understandings
around the world of this stuff. We are comparative,
we are multi national in our focus. We understand
the Internet does not have boundaries, so why would we have boundaries
in our own research? We focus specifically
on emergent technology spaces. So we have a focus on things right now
like encrypted messaging applications
and virtual reality. The metaverse, which you see
so much in the news these days, in which billions of dollars have been
invested by the major technology firms. As the CEO of Apple
famously said a few years ago, we want to replace the iPhone by 2030
with some kind of XR or VR. And so we focus on these sorts of things. Our work on computational propaganda reveals the way
in which a variety of political and corporate actors and a variety
of other people leverage the social media ecosystem for their own means and ends. Our Computational Social Science
Division is run by Dr. Jo Lukito, who combines what we do
with our ethnography to create, I think, a more holistic product,
a product that is not simply technologically
deterministic, not saying there is a clear silver-bullet
technological fix to this, but saying instead that different cultures, societies,
geographical regions, different spaces and terrains online
require different sorts of approaches. And we cannot just come at this
from one direction. We must come at it
from multiple directions if we're actually to solve the issue. We have three major projects
and you'll see that echoed in the talk
that I'm giving you today. The first is a project on encrypted chat
apps and propaganda, specifically looking at the ways in which Signal,
WhatsApp, Telegram, Viber, these kinds of spaces are not only seen
by many as a panacea for the problems that we have at the moment,
because they're private, but also the ways in which
they're already co-opted by governments and the powerful in order to spread manipulative information,
particularly in places like India, where the BJP has a stranglehold
on WhatsApp, which is effectively many people's experience of the Internet
and their main form of communication. And so we push back against that notion of encryption
as necessarily a panacea. And we argue that in the United States and Europe,
we need to do a lot of thinking about how we build capacity in this space
so that as more people flock to signal and to the private spaces on Telegram
and to these other applications, they actually don't continue
just to see more of the same. And not only that, but this stuff
is hidden from researchers like myself. Meta's move to go
end-to-end encrypted across all its platforms
is exciting in many ways. It can protect democracy activists
around the world. It can protect us from some of the bad stuff we see,
but it can also create a black box. And so one of the things that we say is
we must think more about this. We also think about disinformation in US
diasporic communities, because there were connective nodes
between some of the manipulation that we saw coming, for instance,
during the 2016 election into the United States
via post-Soviet countries and via the Russian diaspora community. But we're also talking to the Chinese diaspora community,
the Cuban American diaspora community, the Indian-American diaspora community,
and working to understand what their unique experiences
are on these applications, encrypted applications,
but also on Facebook, on Twitter, on YouTube, across the Internet,
and, crucially, how they're fighting back. And I think that you're going
to be surprised by some of the counterintuitive things
that we found in these communities. It's quite heartening. They actually oftentimes circumvent
the traditional tools of reporting bad behavior and are building
their own sort of guerrilla capacity to respond to falsehoods and other things like that
through their own inoculation campaigns. And then lastly, the big grandiose thing that we think about at my team
is this concept of collective democracy. If you've been following politics
on the Internet now for any length of time,
you'll know that when social media first arrived,
many people said This is the savior of democracy
or it's going to usher in democracy around the world. We saw the Arab Spring,
we saw Occupy Wall Street. We saw times in which technology
could be used as a really beneficial tool to organize and communicate. But governments
quickly co-opted these spaces. So did powerful political actors
and corporations, and they normalized these spaces for control,
if you want to put a fine point on it, but not
all that goes on in these spaces is bad. There's plenty of good things that happen
on technology and via technology. My own son, for instance, is deaf and uses cochlear implants
and they allow him to hear the world. So technology is not all bad. The question is
how do we think towards the next space? How do we think towards the next stage? What does connective democracy look like
and how do we design platforms and technology
with connective democracy in mind? Here's our team. Just some bright faces
I wanted to show you because I do not do this work on my own. Our team is crucial to this work
and in fact many of them know much more than I do at this stage. So maybe one day they'll be up here some key terms quickly. This versus disinfo bots and algorithms
and computational propaganda. But first, a note on fake news. It's a term that we see
going around a lot. We're talking about
false narratives tonight. But one of the things
that I encourage people to think about a lot is the language that they use
when they speak about these things. The term fake news itself has been
co-opted for propaganda and manipulation. In fact, it's sort of spun out of control
and its original intention, which was to mean purposefully
false articles that were dressed up to look like real news from venues
that were meant to look like real venues like the Denver Guardian,
which does not exist but spread lots of fake articles during the 2016
election, is no longer the case. Fake news is a term that has been
politicized, a term that is used to attack the news
media and institutions. Whenever a politician with thin skin
doesn't like what they see or whenever, you know, a particular pundit doesn't like
what they see, they say That's fake news and it happens around the world,
so please don't use that term. That's the first solution. Very,
very small one. Let's use false news
or let's call it what it is. Let's say
that it's actually misinformation or it's disinformation
or it's mal information. Misinformation,
as many of you might know, is accidentally spread false information. It flows all over the Internet
and has since the Internet's been public and arguably before that, obviously
misinformation is as old as time. It includes rumors,
it includes conspiracies. But the key here is intention. People
don't intentionally spread misinformation. I myself have accidentally spread
falsehoods, even on social media. When I thought something
was particularly interesting because I thought it was interesting,
I said, Hey, everyone, look at this graph. And then someone said,
Hey, Sam, that's actually fake. It's kind of embarrassing given
my position, but it's happened. Disinformation is purposefully spread
false content. It is intentional. It is really the propaganda,
the heart and soul of propaganda. Disinformation is spread
by those in positions of relative power, or by those attempting to pull at people's
heartstrings. It oftentimes relies upon things
far outside the purview of logic, and it appeals to emotion
and it appeals to sensationalism, and it borders on conspiracy
and includes conspiracy. Much of the time, disinformation metastasizes
and spreads into misinformation. So what begins as an intentional falsehood,
planted or seeded among a populace,
then becomes misinformation and it becomes so difficult to track
and equally difficult to get rid of
because of the importance of free speech. And that's the thing
we're going to get into in a second here. One of the core tools that have been used
to spread this stuff over the course of the last ten years, 15 years are bots. You know, there's some stories out there
right now about bots. In fact, Elon Musk and Twitter are in
quite a contentious lawsuit about bots. And Musk has said that there's
too many bots on Twitter and that he doesn't want to buy it
so that forty-something billion dollars isn't going out; he wants it to stay in his pocket. But bots still play
a really big role online. They've kind of fallen
out of the zeitgeist a bit, but bots, you know, obviously
are infrastructural to the Internet. They play a core role in spreading
and gathering information online. In fact, according to many surveys, bots produce
more web traffic than humans online. So there's lots of benign
and good bots out there, but there's also lots of bad bots,
if you like, illiberal bots that are doing things
that are anti-democratic. And the kinds of bots
that we're most interested in here, when we talk about manufacturing consensus
or when we talk about computational propaganda, are bots that mimic people,
what we might call social bots, used for the purposes of manipulation, for the purposes
of spreading disinformation. It's quite easy, actually. These days you really don't even
need to know how to code to create a bot; you can do it through various websites. I won't give the names
because I don't want you to go do it. But you can do this
and you can build a bot that runs a social media profile
automatically. And many people have figured this out. It's been
happening since Twitter was first created. But for some reason, people
really glommed onto it in 2016 and said,
Oh my gosh, the bots have invaded Twitter. I can tell you that in 2010
this was happening as well during elections,
but we just didn't notice it as much. Bots massively undermined the bottom line
of a lot of social media sites because they produce a lot of false
traffic. And advertisers do not like false
clicks and false views and false traffic, which is why Elon Musk has made
this argument that he doesn't want to buy Twitter because of this, but
also because of the problems that exist. At the same time,
you can't get rid of all bots. And so that's one of the things
I want to point out here. We do have a law in
California that attempts to control bot activity, but it's only of limited effectiveness
because of the scale at which bots exist. There's nothing that stops a person
from logging on to Twitter or Facebook or YouTube and spreading their own content
on a profile that's often run by a bot. And so that free speech
question comes back into it. If a person is running the account
some of the time, then how do you delete it? And isn't the bot
just a proxy of a person? My answer to this is yes. The bot is just a proxy for a person. There's always a person behind a bot. It is just a tool. Algorithms.
I'm not going to go deeply into this. I'm by no means an expert on algorithms,
but algorithms are those if-this-then-that pieces of code
that help to prioritize certain kinds of activities online. In the case of social media,
they're making decisions about what you see and why; here we're talking about
trending algorithms or recommendation algorithms. Bots get used to manipulate
algorithms oftentimes on social media. So do organized groups of people. Hernan, the guy I was talking about earlier,
understands this really, really well, and what he does is
he doesn't care oftentimes about his bots or his influencers
talking to people. That's actually a fallacy. That's not the correct way
of thinking about this. People often say,
I would never be tricked by a bot. People tell me this all the time
and I'm like, Well, luckily, a lot of times
the bots aren't trying to trick you. What they're trying to do is trick the algorithms,
which are built around quantitative metrics, into saying,
Oh, this looks very popular. There's lots of people tweeting about it,
there's lots of people spreading messages about it.
The platform then reshares the content through the main page of the website
or on the sidebar and says, Hey everyone,
this is a trend everyone's talking about, hashtag. One of the really sad ones was David
Hogg, crisis actor after the Parkland shooting, number one trend on YouTube,
but it was massively spread by bots. And what ended up happening after that? Well, the algorithm prioritized it. YouTube spread it on their front page,
and then hundreds of news stories got written about that. And the zeitgeist went crazy with
this idea that somehow a high school student who survived the shooting was actually an actor, which was completely false. Now, all of this comes together
to form what I and my colleagues call computational propaganda,
which is the use of automation and algorithms in attempts to manipulate
public opinion through social media. So, relying upon the underlying systems
that exist on social media in order to create these bandwagon effects that I've been talking about, it's a new form of propaganda.
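To make that mechanism concrete, here is a minimal, hypothetical sketch in Python of the kind of engagement-driven ranking described above and how a coordinated like exchange can game it. The post fields, weights, and account names are illustrative assumptions, not any platform's actual code.

```python
# Hypothetical sketch: a trending algorithm that ranks posts purely by
# quantitative engagement counts, and how a small coordinated "like exchange"
# can inflate those counts. Field names, weights, and accounts are illustrative only.
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class Post:
    post_id: str
    likes: int = 0
    shares: int = 0
    comments: int = 0
    likers: Counter = field(default_factory=Counter)  # account id -> like count

def trending_score(post: Post) -> float:
    # "If this, then that": more raw engagement means a higher score,
    # with no check on whether the engagement is authentic.
    return post.likes + 2.0 * post.shares + 1.5 * post.comments

def coordinated_boost(post: Post, accounts: list[str]) -> None:
    # A like-exchange ring or a few hundred rented accounts inflates the
    # very metrics the ranking function trusts.
    for account in accounts:
        post.likes += 1
        post.likers[account] += 1

organic = Post("local-news-story", likes=180, shares=40, comments=25)
boosted = Post("fringe-claim", likes=20, shares=5, comments=3)

ring = [f"rented_account_{i}" for i in range(300)]  # hypothetical coordinated accounts
coordinated_boost(boosted, ring)

ranked = sorted([organic, boosted], key=trending_score, reverse=True)
print([p.post_id for p in ranked])  # the artificially boosted post now ranks first
```

Real recommendation systems are far more complex, but the vulnerability sketched here is the one at issue: a ranking built only on counts cannot tell authentic engagement from manufactured engagement, which is why the manipulation targets the algorithm rather than individual readers.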
it's a new form of propaganda. Yes, propaganda is old. It is. It existed for a long time. And arguably,
with each new media creation, we see new, new forms of propaganda and we see it
sort of spread in unique ways. The Internet has really changed the game. What we see now is propaganda on steroids. And so computational propaganda takes up
this concern and thinks through the ways in which we can actually stop this problem
that exists at scale. As folks at the RAND Corporation have famously said in a paper,
or famously within my small world, you can't fight the fire hose of falsehood
with a squirt gun of truth. That is a truism, I think, when it comes
to computational propaganda. And so that is why we have to talk broadly
when we think about solutions in this space and certainly not advocate
for necessarily fighting fire with fire. Because what we don't want to do is create an Internet
that is just more full of noise, that is more full of distrust
and that is more full of garbage. Honestly,
because a lot of people demand this stuff. They want to see it, and a previous
talk discussed the psychology of this. So I won't spend
a whole lot of time on it. But the way we think does
drive disinformation: our need to belong, our psychology, our system of beliefs,
our parents, all of these sorts of things drive why we believe what we believe
and how we react. The pandemic has been challenging
and the pandemic has meant that a lot of people have been spending a lot of time inside and a lot of time
looking at things like this. And it's also meant
that a lot of people have felt lonely. And when I'm actually able
to think about these problems deeply,
and I certainly have been able
to over the last decade, I think that one of the major reasons
that a lot of conspiracy theory flows and a lot of propaganda flows
in the form of misinformation is that people are lonely. They have a need to belong. Many of the people who spread this kind of
content are pathologically lonely. They belong to these communities. And they say many of the things they say
because they feel misunderstood. They feel left out of society. I'm not asking people to feel sorry
for the reprehensible things that many of these groups say and do. But what I am saying is that if we want to bring them back into the fold, we actually have
to understand their psychology. We have to understand
why they do what they do. And I can tell you that shaming them
or fact-checking them will not work. Fact-checking them after they've bought into a conspiracy
just solidifies their beliefs. They're already anti-institutional. And so this demand for deceit is something
we must understand. We must understand the psychology. And if you'd like to understand
a bit more about this, there's a paper I wrote
for the National Endowment for Democracy with my colleague
Katie Joseff that talks about why some people buy this stuff,
why they seek it out. And we use some case studies in Mexico
and North Macedonia to actually talk through
how this happens in particular contexts. And we go through some passive
and active drivers. This is like Psych 101 stuff, but I'm not a psychology major,
and so I found this stuff really useful in thinking through the solutions
to the problems that we face. And so this underscores
most of the solutions that I'm going to talk about here
in just a second. You know, I've mentioned the bandwagon effect,
but there are also things like the belief perseverance effect: the continued influence
of initial conclusions, sometimes based on false or novel information,
on your decision making and your individual beliefs,
and so on and so forth from there. By now, computational propaganda
kind of has a storied history, you know, like relative to the age of social media, computational propaganda
has been mentioned in the US Congress. It has been discussed
in places of higher learning. It is a well-known thing. People might not call it by the same name. They might talk about it in
terms of influence operations or network propaganda or information ops,
depending on where they come from. But computational propaganda
has really spread and changed, and in order to understand the ways
in which we fight back, you have to understand
how these changes have happened. In 2016, concerns about
computational propaganda came to a head with the attacks from Russia's
Internet Research Agency, what people called the IRA. I remember the first time
a reporter called me about the IRA. I was like, You mean the Irish
Republican Army is spreading propaganda? No, the Russians. And I quickly learned during that year that people lost a lot of hope
in our social media systems. They lost a lot of hope in our ability
to control foreign powers
and their attempts to actually manipulate U.S.
And we continue to see this today. And it's not just Russia that does this. You point out an authoritarian country
in the world and they are doing this. You also point to many democracies
in the world, and they're doing this too. The Computational Propaganda Project at
Oxford, whose research team I used to direct, actually has a series of reports
that are all about global cyber troops that actually work to do this
professionally on behalf of governments. And at last count, I believe the last
report they did was a year or two ago. They said that 87 countries had official
capacity to do this kind of attack, and they were doing it
all around the world. What we've seen is a shift in the way
that computational propaganda works. In the beginning
we saw lots of very clunky bots that were, you know,
what they used to call, on Twitter, Twitter eggs. They just had the default profile
picture, an egg. They didn't have any explanation
of who they were. Their name was JFK x y z, da da da da. And Twitter wasn't deleting them
for the longest time. And so you could just buy 10,000
fake followers in the form of these Twitter eggs on Twitter or,
you know, fake profiles on Facebook. And you could use them to manipulate the
algorithm in the way that I was saying. But now we're seeing a shift towards
more sophisticated AI enabled bots
that actually can have conversations, that can do the chatting that
I was talking about earlier with people and actually show success in getting
people to change their minds about things. Despite what people might think,
we're also seeing a wider range of platforms
being manipulated through the use of automation
with things like headless browsing bots. There's nothing that stops
you from building a bot that can just log on through the front
page of Twitter like a person would do. And so now people have gotten quite savvy to the fact
you don't need to use the API anymore. If you want to launch your bots,
you can go through other mechanisms. There is a continued focus, however, on two things,
and the two things I want you to understand
are the fact that bots organize groups of people,
what we call Astroturf campaigns. These like for like campaign ads, the use of nano influencers or influencers
to support your cause. There's a lot of people
paying them these days. What they're trying to do really is
amplify particular streams of information while suppressing
other streams of information. They want to amplify all the good stuff about their person
or cause while suppressing the bad stuff or the opposition. It means that oftentimes
there's only simply positive stuff that exists out there about particular
political candidates or corporations or causes or celebrities, you name it. But it also means that there is a lot
of very horrible, horrific harassment, trolling, all of these sorts of things
that are used in attempts to shut other people up. Oftentimes
the suppression side of things is aimed at journalists, is aimed at
women, is aimed at communities of color. It is aimed at the diasporic communities
that I mentioned earlier. And because of
that, one of the things that I talk about when I think about solutions is the need
to protect these groups, the need to think through what it means to actually protect
these groups from these kinds of things. And so how we actually not only build safety into our systems and privacy
into our technological systems, but also how we actually work
alongside these communities to help them use the existing
infrastructure that they've built in civil society and in churches
and throughout their community in order to help them understand
the space better, but also, you know, support them through other means, too,
including financially and through criminalization of things that
should be a crime, like lying about how, when or where to vote, spreading misinformation
or disinformation about how, when or where to vote. That's a crime. You should not do that. You should not lie
about the outcome of elections. You should not lie about our political processes,
because when you do, you are misinforming people
about our democracy and undermining this whole great thing
that we're trying to do, this experiment. The other thing is that you are not allowed,
and should not be allowed, and it is illegal, to threaten people
and to harass them. And we see a lot of that,
but we don't see a lot done. So these attacks on marginalized communities
continue. I was a fellow at the Anti-Defamation
League for a year, and one of the things that we saw is that,
you know, communities that already lack a voice in mainstream politics are oftentimes the ones that are
most disproportionately affected. Of course, in the United States,
we don't have a hate speech law and I'm certainly not a legal scholar
and I'm not going to advocate for a hate speech law. But I do think we have to think
about these things more deeply. There have always been limits on speech
in the United States. Free speech is incredibly important, but it's not a carte blanche
to do whatever you'd like. It's not a carte blanche
to attack people or to threaten people with death
because you disagree with them or you think that they are somehow wrong. State and non-state actors spread this stuff. So it's not just regular governments
or militaries or the people that we used to think of when we thought of propaganda
as being the folks who do it. We have to expand our vision
of who can do computational propaganda and propaganda writ large. The Internet allows everyone
to, quote unquote, be a journalist, right? You can spread
information, you know, via tweet. You can be a citizen journalist. And some of that stuff actually becomes
breaking news. Some of that stuff
is incredibly important. At the same time, those same tools
allow anyone nearly to spread pretty potent falsehoods
if they know how to do it well. And so until we understand that it's not just
powerful, well-resourced actors that are doing this, we will not be able
to actually effectively combat it. Now, the big million dollar question before
we really get into some solutions here, free speech versus privacy and safety. It's the perennial problem. And I think that one of the things that,
as I said earlier, we've been told is that we can either have free speech
or we can have privacy and safety. And I reject that. I think that we've had media in the past
that have been able to promote free speech and been able to keep the people
private and safe. These are all inalienable
rights in a democracy, and there should be ways
to design technology, make modifications to it, and also to support society
that help us to not always be saying that any kind of moderation of an online sphere
is somehow in violation of free speech, because we've moderated all media for a very long time for the sake of our well-being
and democracy. I'm certainly not an advocate
for censorship in any way, but what I am saying is that regulatory
bodies have overseen things like TV, radio, and we need regulatory bodies
to oversee the Internet, the Federal Communications Commission
and the Federal Elections Commission, by and large, in the last 20 years have given
up on doing anything on the Internet. They've started to do a little bit more now, and the Federal Trade Commission is
another one we should mention, but we've got to have more traction
in this space. And so, you know, this isn't an individual
call to you here at the talk, but it's a call
to the regulatory bodies to say, listen, you've got to think through these things. You've got to think through
some serious solutions. And the way forward oftentimes seems really murky
when we think about these things. It's easy to go down
a very dark rabbit hole and think there's no solution to the problems that exist
because the cat's out of the bag. And the cat is out of the bag: the Internet is not going anywhere. You can't really get rid of it. Its infrastructure exists
around the world. And there's incredibly
sophisticated technology, to the degree now that we oftentimes rely upon
satellites for connectivity. But what can policymakers, civil society, educators and you do?
What can you do as an individual? Well, there are lots of things,
lots of solutions out there. There are social solutions
that we have to consider. We have to have conversations about
our education system in the United States. We have to have conversations
about how we teach critical thinking and media literacy. And we have to stop being allergic
to these sorts of things. We have to make sure that our children
are exposed to knowledge about the fact
that there are people out there that are attempting to manipulate the conversation
and that are doing things as subtle as changing the framing
of a news article, from a young age. I went
to public school in the United States all the way through high school, and what I can tell you
is that I never really encountered the idea of critical thinking
until I got to college. Maybe my English teacher might have
mentioned it a few times, but there was no class on critical thinking. There was no class on media literacy. There was no class on
how to navigate the Internet. And this remains true in public education
around the world these days. It's changing slowly. We've got to do better. We've also got to do better
in a variety of other social spaces. We've got to do better
in supporting the vulnerable communities I mentioned earlier. We've got to do better at actually helping journalists
to understand how to report on this stuff so that they're not duped again
and again by mis- and disinformation. There's actually a great article called The Fire Hose of Falsehood
by a colleague. And it talks about the ways in which journalists have to be careful
when it comes to reporting on information
that might be intended to manipulate them. Now, technical solutions are oftentimes
the ones that are the most interesting, particularly to people in the Bay Area because of Silicon Valley being nearby
and the storied history here. But technical solutions are oftentimes,
to me, the most short-term solutions; I think of this in terms of short
term, medium term and long term. I think that we have to think through
very carefully how prescriptive we are
with the technology that we attempt to solve these problems with
and how universal we are in the fixes
that we attempt to provide. Technology can be absolutely amazing, but it is only as strong as the social
and legal solutions that undergird it. And so when it comes to the medium
term and long term, I tend to look to the legal solutions,
the regulatory solutions, the social solutions and then
to the technical solutions. Because to me, the technical solutions
at the moment are solutions for how we can get ourselves to the place
that we need to be, where we figure out the failures of institutions writ large to respond
to these kinds of problems, and also the massive decline
in trust in institutions, not just in the United States,
but around the world. And until we do that, any technological solution
that we come up with will fall flat. I was with a friend yesterday at Berkeley,
UC Berkeley's Haas School of Business, and he was talking about this idea
that I want to just pass on to you, which is this concept of hyper
stimuli. What he was telling me is, you know, evolutionarily, as people,
when we were trying to evade predators and whatnot
and we massively needed nutrients, we developed in such a way
that when a food tasted good, we knew it was good for us. We needed fat,
we needed protein, we needed these things. Now, today, we live in an era
where fast food has been developed to taste good, despite the fact
that it's not very good for us. In fact, it's quite bad for us. The analogy is similar
when it comes to the information ecosystem: we've
developed in such a way that we think that when we feel good reading something,
that it must be good for us, that it must be beneficial,
that it must help us. And this is a big problem because hyper
stimuli, this concept of conspiracy theory, sensationalism,
the latest stuff about X, Y, Z, celebrity, look at this page,
look at this page, look at this page. The gamification of the Internet. All of these things are intended
to make you feel good, despite the fact that a lot of what you're engaging with
is not actually good for you and one of the biggest solutions
in this space is actually to train people
how to react to hyperstimuli. There have been people who have talked
about this idea of building a more healthy information ecosystem, not just designing for eyeballs
on the screen, but designing for democracy and human
rights and also people's wellbeing. And I think it's a very important
discussion, because conspiracy thinking is like getting off at an exit too soon on the road to critical thinking. You think that you are doing research,
you feel like you're engaging, you feel like you're going deep
and you're doing your due diligence and you're finding out new
and interesting things. The problem is it exists absent
an apparatus like the scientific method, like empiricism,
the ability to verify and see something. And so we've got to retrain people
to understand that when you're doing that kind of thinking absent methods, you are not engaging
in research. And so there is a lot of education that has to come into this. There are several different resources
I just want to point you to quickly. One is First Draft's, you know, what they call SHEEP: look at the source, look at the history, look at the evidence, look at the emotion, look at the pictures. As an individual, these are simple things
that you can do online before you share mis- or disinformation. Make sure that you check these things
before you share something. Look at the source,
look at where it came from. Look at whether or not
it has any evidence behind it. Use Google Reverse Image Search to see if maybe that image
of a shark swimming through a city street during a flood is actually fake
and has appeared in many other reports. My old project,
the Computational Propaganda Project, which is now called the DemTech project at Oxford,
has this awesome thing for you called the ComProp Navigator that actually will
expose you to many tools that you can use to learn more about disinformation
and computational propaganda, and actually how you can help yourself
to avoid amplification of junk news, and how you can help others. As a previous speaker
here in this series said, you know,
we have to learn to talk with the people who disagree
with us politically. And we've studied this empirically
at the Center for Media Engagement, and we've done some really interesting
surveys about what works and what doesn't. So if you're trying
to have those difficult conversations with family members
who might have bought into QAnon or bought into conspiracy
theories, this is for you. And a few last thoughts. We have to move towards
designing for democracy and human rights. When it comes to our technology,
we have to ask ourselves questions about what it means to encode
small-d democracy into these tools, rather than just eyeballs on the screen. There's a great book
that just came out from the Oxford Studies in digital politics
called Designing for Democracy. I haven't read it, but I can't wait because that is
a really important question. We also need more public interest
technologists. We need more people who actually understand
the technology going to DC and helping to write the legislation
that we so sorely need. Because right now many of the bills that
are coming out are actually quite bad. They exist absent oversight
from technologists who can help to understand whether or not
they're actually viable to implement. GDPR has its own
whole host of problems in Europe, and it's
because, I think, they did not do their due diligence in including public interest
technologists in the conversations. There's a number of resources here that I'd like to point you to before I wrap up:
the Center for Media Engagement,
and a few other folks that are doing really cool work in this space,
mostly universities, so that's my bias, obviously. And, you know,
I also have some books of my own: The Reality Game, as mentioned;
Manufacturing Consensus, which is now available for preorder; and I have one on bots
and one on computational propaganda. So if you'd like to check those out,
of course, go ahead. And the last thing I'd like to say to you
is thank you for coming
and for listening and for watching. And I'm going to turn it over to Eric. All right. We have actually a couple of
additional slides we're going to show you. And these are also going to be posted
in the archive as well as being, of course, on the video. So we have a few here, some books. Some of these are actually free downloads, some references
about detecting false information, and some general articles that were quite interesting.
There are some references and resources about education. So, Stanford University's Civic Online
Reasoning, a set of courses that can be used; The Young Skeptics; then
a series on constructive communications. And notice that near the bottom
we have some groups that are working
to bring people together to say, how can we find commonalities
in these groups where we're not just yelling at one
another? And finally, some really dense technical
reading at the end. So if you really want to, you know,
go to sleep without using any sort of artificial stimulant substances,
reading one of these will, I'm sure, put you to sleep in
no time, but they actually are quite good and good sources
for further information. So with that,
we're going to go to some questions. I've got a few that were sent in
and we've got some now online. And remember, just write them in the chat
and we'll get to them. So we're going to start off. So, we'll sit down there. Yeah, we'll start off with one that I have,
which is: are there actions
we can take as individuals? So, our personal interactions
on Facebook, etc.: what's counterproductive? Is just engaging
with people generally a bad idea? Does it raise the score for some algorithm
or whatever? You know,
I'm certainly not one for saying we shouldn't engage. I think that one of the crucial things
in a democracy that we must do is engage. I understand that
fights on Facebook quickly devolve into name calling and anger. And so I'm not sure that I would say
engaging with someone that you know is a gadfly or, you know, is just there
to provoke people is a good idea. But when you can substantively talk
in a connective fashion, I think that using the online sphere in
that way is perfectly fine. But I think you have to make decisions
about when not to engage. You also have to make decisions
about when not to get on social media. One of the things that I do myself is delete
all the applications off of my phone so that I can only log on to them
through the browser. So it's just too clunky on my cell phone
to do it. So it stops me from being on there as much
as I normally would be, which is a lot when I have the applications on my phone.
I can tell you, not that I've done an official study,
but it's a difference of hours. The other thing is, as individuals,
I think that we have to have conversations with our family, with our kids,
with our grandparents, you know, about their own habits
and about the ways in which they engage. And we have to teach them in
whatever way we can gently to understand the ways
in which they are preyed upon. And some of those conversations
are really difficult conversations, particularly when someone really disagrees
with what you're saying. What I can tell you, though,
is that the research shows that really the best way
to change someone's mind about something
is to talk to them as a loved one. When you have a relationship with someone,
you have quite a lot of power. You might not think so,
and you might think that it's hopeless, but you stand a lot better of a chance
of changing someone's mind than the Associated Press. Even though the Associated Press does a
laudable job at producing nonprofit news, I think that, as individuals, we really have
to have those difficult conversations. There's so many more things I could say,
but please. That came up in our previous talk
about talking to somebody who, you know, had fallen into one of these
rat holes: the idea of trust. Yeah, that as a family member,
as an old friend, you have trust. Yeah. You respect the other person,
you ask them questions. You don't just say, boy,
you're an idiot because as you mentioned earlier, it's not going to work. Well,
you can kind of pretend you're a Rogerian or a Freudian psychologist
and just say, tell me more. Ask lots of questions. Ask a lot of questions that will get you to a place where you can actually have a conversation,
even if the results of the questions that you ask are strange
and you don't agree with them, give them the space to talk through them
and then ask more questions. And I think sometimes you'll find
that you actually make headway. Yeah. And they eventually change their own mind. Yeah. Yeah. We have one question that came in. What are career pathways to contribute to the effort
to protect the populations that you mentioned that are vulnerable
to computational propaganda and establishing criminal penalties? You mentioned one earlier. Great, great question. So this idea of public interest
technologists is one that can be approached
from a variety of angles. Joan Donovan,
who's a fantastic researcher in this space who studies disinformation
also from an ethnographic lens, mostly studies the far right in the United States, and white supremacy
and right-wing extremism, says something that I really like,
which is that we've built the plane. We have the plane now. We need to build the airport now. We need air traffic control now. We need security. We need all these things around it. We already have some of these things,
but all of the jobs that are being created are kind of, you
know, helping towards this infrastructure. So if I were a young person, I would think to myself,
what are the things I really like? Do I love debate? If I love debate,
then maybe a career in the law. Looking at technology law
would be a really good space to go into. Right now, I think there's going to be
a proliferation of jobs all about regulation online,
carrying out the law online, you know, whether it's in the space of social media
or in the space of robotics. Similarly, you know,
if you're interested in engineering, I think there's a lot that you can do towards entering this
career as a public interest technologist,
taking computer science classes, but also challenging yourself
to take humanities classes as well. Because through the humanities
you will get the philosophy that you need to sort of challenge
the idea that you only need to focus on one problem at a time,
that actually everything is connected. There's lots of companies out there
that are hiring in this space; there are security firms like Bellingcat
and FireEye that are doing great work. So if you wanted to go work for them,
you could. If you want to be an academic, you could. We also need a lot more great journalists,
and there's a lot of innovation happening in the journalism field. You know, of course, everyone knows about spaces like ProPublica,
which have done great investigative work, but there's also emergent
platforms like Grid, which are producing great new news
aimed at a younger audience, but that is investigative,
that has a lot of tenacity, and is more nimble and built
for the digital environment. And so, you know, the possibilities are endless. And I really hope more young people know
that, you know, if you want to go into a career,
go into a career that's focused on things
like trust and safety online. And education, train the trainer, or, you know, high school
and elementary school? Oh, yeah. Development
of courses, and the training of trainers and teachers
in how to, you know, give those courses. That's a phenomenal point. We are
at a tremendous disadvantage in this country, in a situation
where our teachers should be treated a lot
better than they are. But I think if we begin to do better
and if more people begin to go into this space and fight for it, that it could possibly be arguably
the most important space of any of this. Does Europe do anything in that area, or...? Some countries do. Yeah. Some countries have more robust programs for helping to educate children
about media literacy. Unsurprisingly, many of the Scandinavian
countries have more systems for this. Denmark and Finland come to mind. But no one has developed a great system. I think Estonia, you know, which thinks
a lot about technological issues, particularly because it faces
a lot of threat from Russia, has developed a lot of tools
in this regard. Toomas Ilves, the former head of state of Estonia, has spoken
quite widely about these sorts of things. We have another question here. Why do people and their culture
resist media education? You know, you want to teach them, and then they're like,
we don't, you know, if it's not reading, writing, arithmetic,
we don't want to hear about it. Yeah, that's a fantastic question. And I think that there's
a lot of things at play here. One is that there's been a
systematic dismantling of trust in institutions in the United States
and around the world for many years. So the propagandists
that we're talking about are very open with the fact that they've purposefully
worked to undermine public trust
in education, in medicine, in governance, because it creates a power vacuum, a space where they can enter
and be the authoritarian leader that we search for to give us the direction that we need. It's a sad state of affairs. And so one of the things
we've got to start doing is rebuilding
trust in the media, rebuilding trust in journalism, in academia,
medicine, etc., so on, so forth. And until we begin doing that in a concerted way and thinking
about this very systematically, I think that we will continue to face
a lot of challenges. The other thing is people are rightfully in
some ways skeptical of the media, right? Like Herman and Chomsky wrote manufacturing consent as a book
that was about the ways in which the media was beholden to the powerful in many ways. And we still face that today. I think there's many news media
organizations that you can probably name, those of you tuning in online, that are beholden to powerful interests
on both the right and the left. And so we need more independent
media voices. I mentioned the Associated Press in sort of a joking way earlier,
but they're phenomenal. You know, we need more Associated Presses.
We need more of the NPRs of the world. We need more of those kinds of
organizations that are actually operating as perhaps nonprofits
or from a position of more neutrality. True objectivity is not possible,
of course; we all come with some subjectivity. We are humans, you know, we have our own backgrounds,
and no reporter is different. But we've got to fight for objectivity. The pursuit of objectivity is
what's important. Being transparent, having clear ethics
guidelines, and abiding by those ethics guidelines
when you write news stories is so important,
and we've seen a lot of that go out the window in the last 20 years. You know, I saw an earlier article that talked about how, with the failure of local media, you know, it used to be
you knew the local reporter. He's the guy who showed up at, you know,
Cub Scouts and T-ball games. And so there was some trust. And now with the failures
and loss of local media, there's not so much trust anymore. So how do you, you know,
can there be groups, or, you know, how can social groups
help to start restoring this trust? How do we restore this trust? That's a really great question
and it's a really great point. I do think that social media companies
bear a huge amount of responsibility for a lot of the failures
in the journalism system. Of course,
journalism should have been more nimble. Media should have been more nimble in responding to the creation
that was the Internet. Simultaneously, however, social media
companies massively benefited from all of the content that was being
created by these news media organizations. They shared them in their news feeds. They weren't paying the local media organizations
when they were showing them in their feed. And so one of the things I've argued,
I think kind of provocatively, in Wired is that, like we had
the tobacco master settlement agreement in the nineties, we kind of need a social media master settlement agreement to try to reinvest
in local media organizations in a way that is actually substantive because there's still people
that want to do that work. But most of those organizations
have shuttered. And so one of the things we do
need to think about is new financial models like, you know,
moving out of the advertising driven model and towards more subscription models
and other things like that. We've got to get creative. Yeah, we have another one that came in, just more of a side question,
so we'll try it anyway. How much influence does preconditioning
a child to accept authority through faith play in the adult refusing to accept reason as an argument? You're preconditioned at that young age, that faith is... Well, yeah, you know, faith, like anything else, is not a
one size fits all thing. And so there's many systems of belief within systems of belief
that actually prioritize higher learning. You know,
the Jesuits are a classic example of this, and there are similar examples
in the Muslim community
and so on and so forth. The problem is in
many communities of faith there is a tendency towards extremism
and there is a tendency towards hiding from truths or from science
that is inconvenient, as Al Gore might put it. So, yeah. And so, you know, preconditioning matters. I do think we're a combination
of nature and nurture. You know, I certainly don't think that we're only nurture by any means,
but nurture does matter. We do see in the research and in the academy that, absent any kind of rebellion, or leaving your family,
or being excommunicated from your faith, your upbringing has a huge impact
upon what you believe later in life. And we see this all the time. I see this at UT all the time. You know,
we have kids who come from a background where their parents are deeply Catholic
and they come to school and their way of relating lots of things is through
their experiences in the church. And similarly,
we have students who come from a household that's atheist and they have
a completely different perspective. The great thing about education
and especially a liberal arts education, is you get them all in one classroom
and you get them arguing and you get them talking through
why they think certain things. And it's like that, you know,
that Socratic method, right? It's asking questions, answering them. And there is a beauty to that. And it's honestly one of the reasons
why I wanted to become a professor. So we have one final question. Sure. That is, how can humor be used? I mean, does that help break down
the barriers? Does that work, or does that not work at all? What do you think about that whole... Yeah, I was laughing about this
a couple of minutes ago with some folks, yourself included, asking that.
You know, you all probably saw that Birds Aren't Real, right? News of it was reported
as high up as The New York Times. And I think it's fantastic. My colleague, Katie Joseff,
with whom I wrote that demand for deceit paper, has said to me again and again, Sam,
we need to do research on satire. We need to do research on humor. It could be an unlikely antidote
to a lot of the problems that we face. And I think she's very right. We see some of this happening
and we've seen it for a long time. Of course, Jon Stewart and Stephen Colbert
and these kinds of people built their careers
on this idea of satirical news or of humor surrounding current events. I think we need more of it. Honestly, I'd love to see more humor. I'd love to see more satire injected. And I mean, think about the most bombastic person, you know,
think about an authoritarian leader: making fun of them, taking the wind out
of their sails, giving them a little ribbing, not taking them seriously is
the worst thing you can do to them, right? An authoritarian person,
a dictator, not being taken seriously. I mean, it completely ruins
their whole line of acting. So, yeah, we need more humor. More jokes, right? Okay. Yeah. Our gratitude to Dr. Sam Woolley for being with us today. And we also think that we should thank our audience, as well as those listening
to the recording. And now this meeting of the Commonwealth
Club of California commemorating its 119th year of enlightened
discussion, is adjourned.