This video is sponsored by Dashlane.

Whenever I get on the internet these days, I'm almost always immediately overwhelmed with the feeling that social media... was a mistake. Now I'm being slightly facetious here; social media has allowed human beings to connect with each other and share information to an unprecedented degree, which I think is amazing. Because of social media, I can instantly and effortlessly communicate with my friends and family across the country, and even across the globe. We don't have to wait until the local news comes on TV to learn about what's happening in the world; these days we often find out before they do. We can do everything from keeping up with entertainers and artists we like, to... planning a raid on Area 51. It's not all bad. But even though we're so connected, there's
still something very distant about social media. Perhaps, no matter how instant or convenient communication becomes, it will never quite replace face-to-face interaction. And it seems to me that social media sometimes really does bring out the worst in people.

In civilized society we've developed unspoken rules of decorum, things most everyone agrees are polite. Some of these rules are arbitrary and antiquated, and they can vary from culture to culture, but in general, most human societies value things like kindness, and respect, and honesty, and putting others' needs before your own. Most of that tends to go out the window on social media, at least to a larger degree than it does offline. According to a 2017 survey, 41% of American adults have experienced online harassment, and 66% have witnessed harassing behavior directed at others. Almost 1 in 5 Americans have been subjected to severe forms of online behavior, such as threats of violence and sexual harassment. And that proportion is even higher for women and people of color. Even people who are normally nice and unassuming can find themselves saying and doing some harsh things online. I'm the f*ckin HAKO guy and I've even
found myself being a little rude every now and then.

Some people think that the lack of face-to-face interaction enables us to avoid thinking of each other as complex human beings. Instead we're just a profile on a screen, so we don't feel the same natural empathy that we normally would. Others more cynically assert that social media has simply shined a light on the true nature of humanity. The distance and quasi-anonymity of social media mean we can get away with being awful sometimes; the implication is that this is how we would all behave all the time if there weren't any social or legal consequences. I don't know which one is more accurate, but we're not going to try to figure that out in this video. Instead we're gonna think about how we might make it a little bit better.

Hi, I'm T1J. [WEIRD VOICE:] Follow me!

So I want to come out the gates and say this is one of those T1J videos where I don't really propose one coherent conclusion. This is just kinda thought vomit intended to get you thinking, and to get a conversation started. Ok? We good? Good. So, one of the things that I try to do with this channel is to encourage people to make an effort to be decent. I think being a good person is harder than the alternative, so it often takes a little bit of effort and willpower. I'm a big advocate of personal responsibility. At the end of the day, you're in control of how you behave. And people like me can try to convince you to be kind and honest and so on, but I can't make you do it. I still think it's worth trying though. And we'd all like to think that there's this small group of maniacs who are the ones making social media difficult to enjoy. But in reality, I think it's most of us. [JASON:] It's us. [T1J:] Some of us just have bad days, but some of us do it habitually. However, simply hoping that people aren't shitty to each other is not going to make much of a difference in and of itself. We still have to wake up every day and interact with people, both on- and offline. And I'd rather we try to take some action toward making social media more pleasant and more safe for everyone.

--
Most social media platforms try to do this in the form of some sort of community rules that we all agree to follow when we create our accounts. For popular websites, these rules usually involve things like "don't harass people, don't post porn or gore, don't threaten nuclear war with North Korea." Different sites, of course, have different levels of strictness. Some will give you a ban over the smallest offense, and others have close to no rules whatsoever. The question is, do these kinds of guidelines actually work? And I'd say they probably do, to some extent.

Witnessing the evolution of social media has been very enlightening for someone like me, who has gone on record as a staunch advocate of free expression. Of course I understand that when you have completely free speech, some people will use that freedom to be just... the worst. But many advocates will respond by saying, "Well, that's the price of freedom, goddamnit! America!" Of course a company that owns the technology gets to make their own rules, but one of the questions that these companies have to answer
is, "How free should speech be on our platform?" For example, if Twitter made it against the rules for users to criticize President Trump, I think most of us would consider that too large an assault on free speech, and we'd probably avoid using Twitter. But where should the line be drawn, then?

Twitter's a great example, because neutrality and free expression have been guiding principles for the site since the beginning, arguably more so than for most other well-known platforms. And like I said, when it comes to free expression, you have to take the good with the bad. And Twitter did that for a long time. But over the past few years, Twitter has received a lot of criticism for failing to address instances of threats, hate speech, bullying, harassment, and other forms of harmful communication. So Twitter has made several changes to their guidelines over the last couple of years. Whether or not you agree that these changes have been effective, it's clear that companies like Twitter are beginning to understand that there's more to this whole social media thing than just free speech. Twitter is a cautionary tale for what happens when you put free speech above everything else. The problem is that when you take the bad with the good, the bad is always so much louder and more potent. And we're not just talking jerks on the internet; we're talking harmful conspiracy theories, hate speech, bigotry, and credible threats of violence, and sometimes, actual violence.

When there are clear rules against this type of behavior, you're inevitably going to see less of it. You may or may not be able to think of a couple of platforms that don't have these kinds of rules, but let's just say they can be quite scary. But this, of course, requires A. clear rules to begin with, that B. actually serve to combat harmful and violent speech, and C. enforcement of those rules. And the size of many of these platforms makes large-scale enforcement really hard to do, which is why most of them rely heavily on user reports. I don't think anyone expects 100% perfection,
but I think it's fair to expect a reasonable response when those reports are received, especially when they become viral public discussions. Like last year, Apple, Facebook, and YouTube purged their platforms of content created by Alex Jones and his website Infowars, which is well known for spreading false and harmful conspiracy theories. Twitter, on the other hand, initially refused to purge or ban Alex Jones, claiming he hadn't broken any of their rules. Many individuals and outlets provided examples of what appeared to be clear breaches of Twitter's conduct policy, but it wasn't until a month later that Twitter officially banned Alex Jones and his related accounts, for... continuing to do what he'd been doing the whole time.

YouTube has historically been known for its unclear community guidelines, especially with regard to their hate-speech and harassment policies. But recently they've actually done a fairly decent job of clearing them up on their support pages. The problem is, they either don't consistently enforce their own rules, or they might just have bad rules. On the page describing YouTube's hate-speech policy, there's a list of examples of, quote, "hate speech not allowed on YouTube." I actually sad-laughed a bit while reading this, because I've seen almost all of these said nearly verbatim on YouTube, largely by people who haven't been punished.

Earlier this year, Carlos Maza, a writer and host for Vox's YouTube channel, complained on Twitter about targeted harassment and hate speech he had been receiving from Steven Crowder, a conservative YouTube commentator. You can look up the details yourself, but Crowder's comments, I think, unquestionably fit YouTube's own definitions of harassment and hate speech as written by them. However, YouTube decided not to take any action against Steven Crowder for those comments. Their main rationale seemed to be that while bigoted harassment is normally against the rules, it's okay as long as it's couched within a political opinion. And you could make an argument that this is actually reflected in the community guidelines. It says, "Don't post content on YouTube if the 'PURPOSE' of that content is to encourage violence or incite hatred against specific groups... Don't use slurs where the 'primary PURPOSE' is to promote hatred." This seems to imply that it's okay to use bigoted slurs against someone as long as promoting hatred is not the primary PURPOSE of the content. And that's true of Crowder, arguably. His primary PURPOSE ostensibly was to debate a political opinion; he just threw some racism and homophobia in there as the icing on the cake.

Now again, YouTube is free to make their rules however they like. But if the goal is to curb harassment and hate speech, this is a really poorly conceived exception to the policy. And to be honest, it really explains how so much absolutely vile political content, far worse than Steven Crowder's, has been allowed to remain on YouTube. Harassment and bigotry delivered as part of a political opinion does not magically cease to be harassment and bigotry. What a bad rule. So if our goal is to make social media more
pleasant and more safe, making clear and proper guidelines and then enforcing those guidelines seems to be pretty important. But things like threats, harassment, and bigotry are not the only things that make social media feel unpleasant and unsafe. How do we deal with behavior that still causes us distress but doesn't rise to the level of breaking the rules?

You could make stricter rules. I honestly would be interested to see how that would work out on a large scale. On my personal Discord server, which you should totally join by the way, I have pretty strict rules compared to large social media apps. It's obviously against the rules to threaten or harass people, but it's also against the rules to be an asshole, in general. But it's a small community that's been cultivated by the type of content that I make, so I almost never have any real problems. I wonder what would happen if a large public platform just outright banned trolling and being an asshole. There would be a lot of borderline cases, and a lot of people would complain about their free speech or whatever, but maybe it would result in a friendlier and safer environment. And maybe all the people who insisted on being terrible would just go somewhere else. I don't know; I'm not sure it's been done before. Correct me if I'm wrong.

I strongly believe that free speech and expression should be legally protected. You shouldn't go to jail or experience violence simply for expressing yourself, even if that speech is offensive or repugnant. But I don't think people have a right to a platform, and I don't think the right to speech implies the right to be heard. If someone chooses not to open their platform up to you, that's their prerogative. Of course, some people might have differing views about what qualifies as trolling or being an asshole. Some people think that they should be allowed to be an asshole if they think the person they're an asshole to deserves it. Any hypothetical platform might create rules that are so restrictive that intelligent discourse is rendered impossible. But I dunno, it's hard to imagine that any useful idea can't be expressed in a respectful way. Maybe I'm missing something. Another thing that's typically not found
in most platforms' community rules is holding people responsible for their followings. Of course, directly inciting your followers to harass or threaten people is usually a no-go, but figures are rarely punished for the harmful actions of their followers, even if those actions are clearly inspired by them. This is a tough issue to tackle, to be fair, because you never know what your followers are gonna do, and it would be unfair to blame you for something someone else did. But like, if you make a video complaining about how awful someone is, and then your audience immediately goes and harasses that person, it seems very clear that the harassment was inspired by you, and wouldn't have happened if you hadn't made the video. So there's some level of responsibility that should be attributed to you, especially if you don't make a good-faith effort to discourage your community from participating in this kind of behavior. And don't get me wrong, I'm starting to understand how communities can radicalize in ways independent of the people they form around, but I'd say only certain types of creators even have an audience that would consider behaving in that way in the first place.

The streaming platform Twitch is notable for being one of the only platforms that directly suggests a responsibility of creators towards their communities, and threatens punishment to creators themselves for the actions of their followers. It's vague, but it's something. And since we know that public figures can inspire toxic behavior in their followers, whether directly or indirectly, this is something that needs to be solved. Enforcement of these policies is usually handled
by giving users tools such as the ability to block people, as well as the company itself handing out punishments such as limited access, temporary suspension, or outright bans. This mimics the way justice is carried out in most situations where there are rules or laws that people are expected to follow: you do bad thing, we make you suffer in some way. This kind of retributive punishment is designed not only as a deterrent for future offenses but also as a way of purging problematic content from the site.

The effectiveness of this kind of system is limited, though. In order for something like a suspension or a ban to work as a deterrent, people need to perceive losing access to the platform as an undesirable consequence they care enough to avoid. For someone like me, or Alex Jones, losing access to our social media platforms is very undesirable, as social media is a significant aspect of our careers. For the average anonymous troll, it doesn't seem like they would really care that much.

So one approach is to facilitate harsher consequences for these kinds of infractions. Facebook is notable for requiring users to sign up using what is presumably their real names, although some people do circumvent that, and often people are connected to their close friends and family members through Facebook's network. Facebook also encourages people to provide information on their profiles such as what city they live in and where they work. This kind of information makes people much more likely to experience personal consequences for toxic behavior online, whether imposed by Facebook itself, or by the people and groups who track you down using the information you've put on the internet. A person who makes a threat of violence, for example, is likely to receive more severe consequences on Facebook than on Twitter or in a YouTube comment.

What if, instead of being suspended from a website, it wouldn't let you log in until you donated a certain amount to charity or to the person you harassed? Do you think that would be a bigger deterrent? Just a hypothetical idea; I understand that money-based punishments disproportionately affect poor people. But I wonder: if there were a way to implement a more "close-to-home" punishment, would that have a larger effect? What if you weren't allowed to log in until the offending content was shown to your employer or your grandma? You don't wanna disappoint Granny and Paw-paw! Of course, these would be hard to carry out,
and would have varying results, but you get the wavelength I'm on.

But that calls into question the efficacy and ethics of retributive justice in the first place. That's something that's been hotly debated for a long time. Some think that bad people deserve retribution regardless of whether or not it's effective at curbing the behavior. As you might have guessed, I prefer a more utilitarian approach: I want to know what strategy is going to result in the best outcome for everyone involved. A proposed alternative is restorative, or rehabilitative, punishment. This approach aims to get offenders to take responsibility for their actions, and to give them an opportunity to redeem themselves. And in fact many studies show that restorative measures have a high likelihood of positive results. One meta-analysis from 2007 suggests that, when compared to conventional punishment, restorative justice was not only more likely to reduce repeat offenses, but also to reduce distress in the victims. When applied to harmful behavior on social media, this could entail something like requiring offenders to complete an online sensitivity course, or correspond with a counselor. A moderated dialogue could be opened up between the offender and the victim. All of these options are of course made more complicated by the fact that most popular social media platforms are free to join, and new accounts can be made in literal seconds, which allows bad actors to circumvent punishment. And restorative justice specifically can only happen if victims and communities are willing to accept the possibility of their harassers being redeemed, which I admit seems unlikely given the social media atmosphere.

Which brings up another point: the atmosphere. If a person gets a gun and commits a crime with it, obviously that person should be held responsible for their actions. But it's still worth discussing how the gun made it so easy for them to pull it off. Likewise, on social media people should undoubtedly be held responsible for their own behavior. The guy from the previous analogy is not holding a gun to your head and forcing you to be awful on the internet. However, we should talk about how the nature
of many social media platforms encourages toxicity.

Before the age of the internet, outrage was an emotion that was relatively uncommon for the average person to experience. Every now and then you'd see a news headline or hear a bit of gossip that got the blood boiling a little. Like, did you hear that Bob Johnson from down the street left his wife for a younger woman? I mean really, who does he think he is? But life would quickly return to the mundane. And even today, it's not something we tend to experience all that often in the rare moments that we are disconnected from our devices. But as soon as we put those screens back in front of our faces, we're assaulted with plenty of opportunities to be outraged. And outrage is one of those emotions that is both addictive and contagious.

To a large extent, this seems to be human nature. Humans are the only species that expresses moral outrage, and the only one that seems to enjoy punishing others for perceived wrongdoings. Even our closest animal relatives, chimpanzees, don't behave this way. Punishing other people literally makes us feel good. There are a lot of ideas about why that is. Some researchers think that we punish others to subconsciously advertise our own righteousness, to "virtue signal" as they say. There's also the fact that cooperation was no doubt an evolutionary advantage for early humans, so perhaps punishing those who we perceive as non-cooperative is hard-wired into our brains. But as I implied, in everyday life we're faced with few opportunities to actually carry this out. At least until we log on to social media, where we can be bombarded with a literally nonstop content feed that's sure to provide many justifications for us to express moral outrage. Because, unlike when you're offline, you don't have to personally confront anyone face to face. If you see someone do something you don't like out in public, you could try to shame or expose them, but there's a risk attached. Online, there are far fewer consequences. Also, in person, there's less of an opportunity for you to be socially rewarded for your heroic deeds. On social media, however, you get Likes and Favorites, and Retweets, and Reblogs, and Shares, and Cry-Laughing emojis.

And like I said, this is contagious. Studies have shown that content that causes moral outrage is much more likely to be spread than other types of content. In fact, each emotional word in an online message increases the likelihood of it being shared by 20%. And of course, these companies have noticed this. They have a financial interest in presenting you with content that encourages you to engage and click and comment. So most of them have developed algorithms that fill your feed with stuff that they think you're likely to engage with. And quite often, that's content that makes you mad or outraged. And so it becomes a cycle. So a large part of the responsibility for
solving this problem lies on social media companies. These algorithms are significant factors in what's driving the division and toxicity online. These companies could also create technology or basic features that discourage knee-jerk outrage. What if you had to wait at least 30 seconds after reading a post before you could reply? Research has shown that taking time to cool down diminishes our inclination to be cruel to others. What if an algorithm was created that could detect when someone was being, or was about to be, rude online, and automatically locked their account for a short period? Again, these are just ideas to get the conversation started.

Don't get me wrong, sometimes outrage is good, and we want it to spread virally. The exposure of Black Lives Matter and the #MeToo movement are examples of justifiable outrage that was spread largely through social media. Or like when we all lost our shit at that terrifying CG Sonic the Hedgehog. But it can quickly get out of hand, even when the outrage is legitimate. Calling someone out for saying something shitty is often a reasonable thing to do, but thousands of people doing it very quickly becomes indistinguishable from bullying.

I could really talk about this for a long time; there are so many different factors to consider. But ultimately it's going to take a shift in both user behavior and technology to improve the state of social media. And it will no doubt be difficult to make either of those things happen to a significant degree. But the first step is to at least talk about it. DAS JUS ME DOE. What do you think?

Thank you for watching, and thank you to Dashlane
for sponsoring this video. Dashlane is the ultimate tool to help you stay safe online. Worried about losing access to your accounts, having weak or reused passwords, worried about hackers, or concerned about somebody monitoring your internet history? Dashlane has you covered with a myriad of tools at your disposal, such as a password manager, autofill for personal info and payment details, a VPN with country selection for safe private browsing, and dark web monitoring to see if your data is being bought and sold on the dark web. There's a free version with basic elements, but Dashlane Premium gives you access to all these benefits at a cheaper price than other security services that have fewer features! Dashlane does, all in one package, what you would normally need 3 or 4 different tools to do. If you're not convinced yet, Dashlane has graciously allowed me to offer my viewers a FREE 30-day trial of Dashlane Premium. Just go to dashlane.com/t1j and you can see all of these features in action, and try out Dashlane for yourself. If you like it, make sure you use the coupon code T1J at checkout for 10% off your purchase. And remember, by supporting sponsors like Dashlane you not only get access to a great service, but you also support me and allow me to take my content to the next level.