- Thanks to CuriosityStream for keeping LegalEagle in the air; CuriosityStream now comes with Nebula for free when you use the link in the description. Well the president has now
gone to war with Twitter because Twitter is applying warning labels to some of his tweets. It's 2020, so that pretty much checks out. As you may know, President
Trump is a fierce critic of speech that disagrees
with his own viewpoints and don't take my word for it, take his. - See, I don't think
that the mainstream media is free speech either because it's so crooked,
it's so dishonest. So to me free speech is not
when you see something good and then you purposely write bad, to me that's very dangerous speech. - But as president he
believes that he has the power to shut down private companies who distribute comments
that he doesn't like. He tweeted, "Republicans feel that social media platforms totally silence conservative voices. We will strongly regulate or close them down before we can ever allow this to happen." And President Trump has toyed with issuing an executive order targeting online tech companies for years. But now that Twitter has
started putting warnings on some tweets that the
President tweets out, the dam appears to have finally broken. The American President
issued an executive order purporting to dictate how online platforms like Twitter, Facebook and
Instagram regulate speech online, which of course brings
up the age-old debate between platforms and publishers, the Communications Decency Act Section 230 and the First Amendment. So the question is what
did the President just do and can he do it and does he
understand the First Amendment? (patriotic music) Hey LegalEagles, it's time
to think like a lawyer because there may not be a
single topic more misunderstood than freedom of speech. So let's start at the beginning. The first one, the First Amendment. As you may know, the First Amendment protects freedom of
speech among other things. Now the biggest misconception about freedom of speech via the First Amendment is that it applies to private actors; in fact, it only restricts what the government can do to private individuals. The key phrase in the First Amendment text is that "Congress shall make no law." This means that the First
Amendment is directed at stopping the government
from censoring speech or compelling speech. Private companies, generally speaking, can do whatever they want
as it regards speech. Now of course there are exceptions to this carved out by the Supreme Court for some limited categories like obscenity and incitement to imminent lawless action, but these are tiny exceptions to the general rule
that the First Amendment prohibits the government from restricting private individuals or
private companies' speech. And the term freedom of speech
has a specific legal meaning. Freedom of speech is a negative
right against the government that prevents the
government from punishing most kinds of speech. Freedom of speech does not
apply to private companies, generally speaking. And also, generally speaking, it prevents the government
from forcing individuals to make speech, that is
called compelled speech. So as a general rule, as
one commentator put it, "The Constitution protects Twitter from the policing of a president. It does not protect a president from the policing of Twitter." And similarly that gets into censorship, which is the other side of the same coin. Censorship is when the
government restricts or prohibits certain kinds of speech,
it's a governmental action. Private companies might delete comments or moderate content and you might think
that that's a bad thing and you might have great
arguments for that, but whatever that is, it's not censorship, at least not in the legal sense. Nor is censorship facing the repercussions of one's own speech. The First Amendment and freedom of speech doesn't shield you from
the social consequences of the speech that you made. So with that very general background about the First Amendment
and freedom of speech, that takes us to the
Communications Decency Act. Now you might have heard of the terms platform or publisher before. Generally those come from a specific section of the Communications Decency Act: Section 230. So if you've ever heard someone
refer to 230 or Section 230, what they're probably talking about is the Communications Decency Act, including the President who
just recently tweeted out that we should revoke 230,
without providing any context. Now if you've ever read
the First Amendment, you might notice that it
says nothing about platforms or publishers or
responsibility for defamation. That's because those concepts come from the CDA, which was passed in 1996. Now we'll talk about what
the CDA Section 230 does in just a second but I think it's worth
going over the history of the Communications Decency Act because the history that led to the enactment of this particular law is incredibly fascinating and it harkens back to the very
beginnings of the internet. Now in the early days of the internet, there were no laws regulating
the content on the internet and it quickly became apparent that if people posted whatever
they felt like saying, some inappropriate and illegal things could make their way onto the web. Someone could post child pornography, some people could post
incredibly defamatory statements, not to mention people posting things that just violate someone's copyright. And lawyers and legislators had to grapple with whether there would
be liability attached not only to the person that
posted that information but anyone who hosted that information. Now traditionally, content
distributors were not liable if they distributed
something that ran afoul of one of the areas where
speech is restricted or has legal liability, and I'm using the term
content distributors incredibly loosely, here we're talking about
historically stores and newspapers. Now in the 1950s, the
Supreme Court overturned a Los Angeles ordinance that said, "If you have obscene material in your store, you can be held criminally responsible for selling that paraphernalia." In particular, a bookstore
owner was convicted of violating the ordinance
by selling a novel called "Sweeter than Life"
which tells the story of a quote "ruthless
lesbian businesswoman." The store owner was
sentenced to 30 days in jail for simply selling the book. The Supreme Court said that the bookstore couldn't be responsible for reviewing every single thing that
was in the books or magazines that it happened to sell. Fast forward about 40 years with the invention of the internet, we now have a similar
but different situation involving ISPs and
people that post content on their websites. Now two of the first ISPs,
CompuServe and Prodigy, were both sued for hosting forums where users posted defamatory content. Prodigy, for example, billed itself as
a family-friendly ISP, so it moderated comments that
were hosted in its forums, removing things that it thought were bad. Now CompuServe, on the other hand, didn't moderate anything in
its forums, and the courts found that this
distinction was important. But both CompuServe and Prodigy were sued for the allegedly defamatory content that was posted in
their respective forums. Now, the case against
CompuServe was dismissed because the court
considered that CompuServe was merely a content
hoster and a distributor rather than an author or a publisher. As a distributor, CompuServe
could only be held liable for defamation if it knew
or had reason to know of the defamatory nature of the content, but since CompuServe wasn't
moderating its forums, it had no reason to know what material was posted to those forums. But things turned out
differently for Prodigy, the company that wanted
to be family-friendly. A post appeared on a money talk forum claiming that the investment
firm Stratton Oakmont committed criminal fraud in
connection with a stock IPO. Yes, that Stratton Oakmont, the one that gave rise to the
film "Wolf of Wall Street." - Gentlemen, welcome to Stratton Oakmont. - I can't imagine why someone thought that Stratton Oakmont
was committing fraud. But the firm Stratton Oakmont sued not only the anonymous poster who posted the allegedly
defamatory content, that in retrospect was probably
not defamatory in any way, but it also sued Prodigy
for hosting that content and a court held that Prodigy was liable as the publisher of the
content created by its users since it exercised editorial
control over the messages on its bulletin board. And unlike CompuServe which
was compared to a bookstore, Prodigy was considered to be more like the editor of a
newspaper's opinion page. In essence, Prodigy was liable because it created content guidelines, proactively enforced those guidelines and used software designed
to remove offensive language. And under this reasoning, Prodigy knew or had reason to know that the content was
quote unquote "defamatory" since they had those guidelines in place and were exercising their discretion. Now it's worth noting that these weren't Supreme Court cases, these outcomes aren't necessarily dictated by the Constitution, these are lower court opinions about the interaction
between ISPs and defamation. So these cases aren't
necessarily set in stone, but early internet companies
drew a very clear lesson from these lawsuits. They shouldn't do any moderation at all since that was the best
way to avoid liability. There didn't appear to be a middle ground between absolute lawlessness and moderation that exposed you to liability. But before any of these cases
could go to the Supreme Court, Congress actually stepped in. Congress didn't like the fact that businesses were being held liable for the acts of their users. When Congress finally enacted the Communications Decency Act, they were doing that in response
to the moderation dilemma. Lawmakers wanted online service providers to be able to moderate content. At the time, they were especially worried about things like harassment
and obscene material, so they gave online providers
immunity from lawsuits if they moderated their content. And as an incentive for internet companies to create basic standards, the law granted them
immunity from liability if they simply adopted
standards and enforced them. The Communications Decency
Act was specifically created so websites didn't need to be neutral. So that takes us to the
controversial Section 230. The relevant portion
of the CDA Section 230 is actually very short,
clear, well-worded, and an understandable piece of law. If you go to Section 230 subsection C, it's called Protection for
"Good Samaritan" blocking and screening of offensive material. Sub one says, "Treatment of publisher or speaker. No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." And the term another
information content provider is incredibly important, that means another user,
a commenter, a tweeter, a person on Facebook, a
person posting to Instagram. So that's the general immunity that protects not only Twitter but someone that happens
to have a YouTube channel or a blog from being
liable for the postings of other users and content providers. But then subsection two
goes a little bit further and it specifically allows for people to moderate content on
their platforms or websites. It states that, "No provider or user of an interactive computer service shall be held liable on account of (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1)." So let's talk about these two provisions. Section 230 C1 establishes immunity from liability for third-party content, under which a website is not
to be treated as the speaker or let's say publisher of the illegal or otherwise offensive
content posted on a website by a third party end user,
or what the CDA refers to as another information content provider. We'll summarize that as
you're not responsible for what a different
user posts to your site. Now section C2 also protects websites against any civil liability should sites engage in self-regulation or make good-faith efforts
to edit the website by blocking, screening
or restricting access to basically any content that they don't want on their website. So in other words, platforms
like Twitter, or Facebook, or blogs or YouTube channels, can't be sued for defamation on the theory that they had knowledge of defamatory or otherwise illegal content simply because they moderated their content and were going through comments or other postings and removing ones that they didn't like, or didn't want on their website. And it's hard to overstate
the importance of Section 230 in the development of the early internet, and really as the internet
has developed to this day. Whether you agree with it or not, it is a powerful tool
that provides immunity and shields internet service
providers and websites from liability for content that they host, and we'll certainly get
into the differing opinions on whether 230 is a good
thing or a bad thing. But as a legal matter,
decades of court decisions have affirmed the robust immunity that is provided under
the CDA Section 230. And here are a few recent examples that show how Section 230 has been used as a shield by content providers. For example, a man sued
the Grindr dating app for negligence, products liability, and related claims after
someone impersonated him and posted fake profiles on the app. The court found that Section 230 barred claims against Grindr
for the design of its platform as within the purview of protected traditional editorial functions. Similarly, in Wisconsin a plaintiff sued a classified advertising website because the plaintiff wound up buying what turned out to be a murder
weapon from a private seller. The case was dismissed because the plaintiff sued the website claiming that the website was responsible for what someone else
had posted to the website and ultimately sold to that person. But the website didn't sell the gun, it just hosted the ad for it. So it wasn't responsible for what happened to go wrong with that sale. But here's a key point
that's often misunderstood. In both of those cases, the
person that wrote the posts that were ultimately misleading or illegal is responsible for
the legal repercussions of those posts, just like anyone that posts
defamatory content on Twitter is responsible for the defamation
that they may have done. The only thing that the
CDA Section 230 does is it exempts the
platform, or the website, from being legally responsible for the posting of some other
third-party user. But by the same token,
websites and platforms want the flexibility to be able to remove those kinds of
postings if they so choose. If they find out that someone is trying to sell a murder
weapon in a classified ad, they want the flexibility to
be able to remove that post if they find out about it. So if you've ever wondered why websites have these incredibly long
terms of service contracts that effectively allow websites
to do whatever they want and remove whatever content
or users they want to, that's why. But obviously Section 230
has received criticism on both the left and the right. There are many people who
claim that Section 230 enables discrimination against certain kinds of viewpoints, in particular, political viewpoints, and ironically my Twitter feed
is full of people on the left who claim that tech companies
are biased against their views and people on the right who
claim that tech companies are biased against their views as well. Certainly it's not surprising that when websites moderate
their content and their users, people are going to claim viewpoint discrimination, in particular political viewpoint discrimination. But as a practical matter,
the vast majority of people probably don't want
truly neutral platforms. A truly neutral platform would be a place of anarchy ruled by extremists and hecklers. There's this concept
in First Amendment law called the heckler's
veto where, for example, you're not allowed to discriminate against a speaker coming
to let's say a public forum based on the reaction of the people that are going to view that speech. So for example, if there
was gonna be a speaker at a college campus and you
knew that the college kids were going to riot or create violence in response to this particular speaker, if you allowed that violence to dictate who can and cannot come
to the college campus, the hecklers are allowed to
veto any kinds of speakers, and so that's not a valid basis to discriminate against
someone's viewpoint in a public forum, that again, has nothing to do with a private forum. But you can imagine that
there's an analogy there to where if you're not allowed
to do content moderation, that is the extremists
and the loudest voices that are going to rule all
of these different platforms. Now there's a time and a
place for a 4chan or an 8chan, I just don't want to hang out there and I don't think most people do either. And the ultimate irony is that
if Section 230 didn't exist or it was revoked, and Twitter was liable for anything that its users posted, as a result they 100% would
delete Donald Trump's account and would not allow him to post or they would vet all of his tweets before allowing them to go public. And we focus on the big tech companies but it goes far, far deeper than that, it's every YouTube channel, every blog, any website that hosts comments. I asked my fellow internet
lawyer Richard Hoeg to talk about what would
happen if Section 230 were revoked or severely curtailed. - Hey LegalEagle, it's Rick from Hoeg Law, you might remember me
from such YouTube videos as COPPA, I just can't quit you; COPPA, it's not you, it's
me; and other things. As you rightly point out, CDA Section 230 is a foundational piece of
the internet superstructure and as a foundational piece, it's important not just to the Apples, Facebooks and Googles of the world, but also to places like ResetEra if you're interested in
talking about video games or Reddit if you're interested
in talking about video games. The language of Section
230 makes no distinction between large operators and small, which means it protects everyone. But it also means that a change in that law will affect everyone, and not to put too fine a point on it, but the FTC most recently,
while discussing COPPA, has intimated that YouTube
channel runners like you or me are responsible for the content that appears on those channels as if they were YouTube itself. So what does removal of Section 230 mean? It means that places like
Reddit or your channel or mine might have to be responsible
for every comment that appears on them. You might have to be responsible for the comments that appear
on your YouTube videos, I don't want that, I
know you don't either. Now I'm not trying to
pretend that Section 230 or DMCA or COPPA
descended from divine writ and that we need to hold fast to every bit of the language that was offered, but I do think that we
need to be very cautious about any law that
affects this many people. - I think Richard's right. The stakes are incredibly
high and revoking 230 could destroy the internet as we know it. Definitely check out
Richard's YouTube channel that has great analysis about internet and entertainment law. So that's the basics about
the Communications Decency Act but misunderstandings about Section 230 abound on the internet, so let's take some time to dispel some of those misunderstandings. The first is of course what
about publishers and platforms? Journalists, politicians
and internet commentators frequently make a mistake when they talk about CDA
Section 230 immunity. They often say that these
internet companies or websites either are publishers and are responsible for the
content that their users post or that they should be
treated like publishers and held responsible for
what their users post regardless of how the law is currently constituted. But that's not really the key distinction. Websites are already liable for the content that they create. A website like Twitter, or Facebook, or a newspaper like "The New York Times" can and often does create content, and when they do that, they are responsible for
the legal repercussions of the content that they create. A website can be both
a publisher, an author, and a platform at the same time. As the law stands, they
just aren't responsible for the content of the
users or other third parties that post to those websites or platforms nor are they liable if they choose to moderate their users' content. The law currently recognizes
that some internet providers share some similarities to
edited print publications like newspapers in certain contexts, which are, under some circumstances, held liable for third party content. However, they also have
attributes of common carriers like telephone companies which serve as passive distributors of third-party information. Under Section 230, websites
are held responsible for the content that
they create themselves but not for the content of other people that's simply posted. So this division between publisher, platform or distributor
is kind of a red herring. And the terms platform and publisher get very, very confused. Really, when you think
of this distinction, you should think of it as the distinction between a host and an author. Hosts aren't liable
for other users' content but authors are always responsible for the content that they author. Sometimes a website can be an author, and sometimes a website can
host other people's content, and sometimes a website can
do both at the same time, you have to look at the
particular speech in question to determine what
applies in which context. And there are two other concepts that are continually
made part of this debate that sort of get lumped into it. One is neutrality and the
other is the Fairness Doctrine. Now neutrality can mean different things in different contexts but in this context, Senator Josh Hawley has
introduced legislation that would amend the CDA Section 230 so that internet companies would lose their Section 230 immunity if they did not apply their rules in a politically neutral manner. This is obviously a response to the perceived anti-right-wing bias of certain tech companies. This is one of many amendments
that are on the table that haven't passed but
potentially could in the future. Query whether this sort of rule, in other words, prohibiting viewpoint discrimination by platforms, passes First Amendment muster because the government
generally can't mandate that certain viewpoints
be allowed or disallowed. The First Amendment has
been around for a long time and the Communications Decency Act has been around since 1996, so to take this action to
change it is anti-conservative because you're changing the status quo. But that's a question for another time. Now the related concept
of the Fairness Doctrine harkens back to the
days of broadcast media like radio and TV which once had a legal duty
to air balanced content covering both sides of a
politically controversial issue. This was called the Fairness Doctrine. The idea was that because you were using the broadcast spectrum of which there was only a limited supply, the broadcast networks had a
duty to present both sides. Now Republicans fought for years against the Fairness Doctrine because they argued that
the balance requirement had a chilling effect on the delivery of the conservative message,
and finally in 1987, President Reagan repealed
the Fairness Doctrine. This was considered a huge
victory for freedom of speech because broadcast networks
were no longer mandated to carry certain messages and it actually led to the flourishing of things like conservative talk radio since stations were
free to broadcast views without offering other sides of the story. And while the original Fairness Doctrine only applied to broadcast media, those generally on the right who are detractors of the current makeup of the Communications Decency Act want to basically instill
a new Fairness Doctrine onto the public platforms
that host users' content. So with that context,
that brings us to the war between President Trump
and Twitter's Jack Dorsey. During the last few weeks, the president has gotten
into several Twitter wars, first accusing MSNBC's Joe Scarborough of murdering a former staff member, prompting both Scarborough
and the family of the deceased to speak out against Trump's comments. - Vile people driven by
hatred and petty politics. - Though ironically,
since Trump's comments could get him sued for
defamation when he leaves office, Twitter probably would not be responsible 'cause they're immune under
the Communications Decency Act. Second, the president
has been bashing states that are promoting voting
by mail during a pandemic and as part of this feud, the president has made
some wild predictions about voter fraud that would come as a result of voting by mail, which are most likely false. Now after considerable public pressure, Twitter decided not to moderate the tweets about the alleged murder
but did add some context to the tweets about voter fraud, and then recently President
Trump spent some time rage tweeting about
racial tensions in the US threatening to sic the
military on protesters and effectively threatening
to shoot looters. Twitter labeled those
tweets as promoting violence and added a warning gate
that you had to go through before you could actually
view the President's tweet. And however you may feel about the President's underlying tweets or Twitter's response, it's more or less
Twitter's decision to make and I would argue that
the language of the labels that they added to these tweets would constitute something
that was authored by Twitter. So the language of the
labels that were applied, if for example, those are defamatory, which I don't think people are arguing, but if they were, then
Twitter would be liable for a defamatory statement because Twitter did in fact
author those particular labels. But that's a statement by Twitter itself as opposed to the
statement of a third party. But regardless, this action by Twitter pushed President Trump over the edge, leading to the long
threatened executive order against social media platforms
that he doesn't agree with. So let's look at the executive
order of the President. Now remember, an executive
order can't change the law. Now sometimes an executive
order can be very powerful because the President is in control of the entire executive branch
which includes the military and the entire administrative state, but where Congress has
written something into law, a presidential executive
order cannot change what Congress has done, but that hasn't stopped
the President from trying. And this particular executive order goes straight at Section 230 of the Communications Decency Act. The executive order starts
by making policy arguments about what it's trying to accomplish and makes some statements about freedom of speech and the First Amendment. And remember from the start of this video that this largely is not a free speech issue; this is speech that comes from a private party, and freedom of speech relates to a negative right against the government. So for the most part, the order makes the classic mistake of talking about freedom of speech when we're talking about the
speech of private individuals. Now the executive order goes on to attack the good faith requirement
of the moderation of websites and says that because effectively websites are making viewpoint discriminations related to political ideals, that they're not operating in good faith. Now that is the President's
opinion about what the law says, an interpretation of good faith, that is not how courts have
interpreted that requirement, and that's not what courts have required of websites that actually moderate their comments. So this executive order does not carry the effect of law with
respect to what courts are going to assume about Section 230 of the Communications Decency Act and it's possible that Congress could change that requirement
and adopt the Hawley Amendment but at the moment, they haven't, so this portion of the executive order doesn't really accomplish anything. But the order does go on to
say that all federal agencies should adopt the President's understanding of this requirement and
effectively curtail Section 230 and it asks the Department
of Commerce and the FCC to adopt rulemaking to
effectuate that decision. The FCC is supposed to be an
independent regulatory body, so it's not clear whether
they will adopt this or not, and generally the FCC has said it actually doesn't have the power to police the internet in this way. So we'll see what happens
with respect to that. The executive order also asks the FTC to consider whether website moderation is an unfair trade practice, which might result in some action, but I have to believe that
if the FTC punished websites for viewpoint discrimination, they'd run headlong into a bunch of First Amendment lawsuits. So not clear how strong
that action would be. It also purports to create a
mechanism for public complaints about violations of
neutrality in moderation and asks for a working group related to state attorneys general to be able to view state laws to prevent this kind of moderation, though traditionally the
Communications Decency Act has been ruled to preempt
state laws as well, so it's not clear what, if
any, the state AG's can do in this particular context. Now interestingly, the
executive order also relies on a legal theory that internet platforms are the functional modern
equivalent of a public forum and therefore it's everyone's right to be able to access that public forum. The theory is that the
importance of social media in public discourse means
that social media sites should be treated as government actors subject to the First Amendment rather than as private entities. And if websites were
treated as state actors, then they would need to
follow the First Amendment just as the government does and just like the government, they couldn't ban, block or delete content based on the viewpoints of
that particular content. This of course is the
antithesis of Section 230 which encouraged private
companies to moderate content rather than let the internet
collapse into anarchy. But at the heart of this debate is whether websites
like Twitter or Facebook are the functional
equivalent of a town square or whether they're like a private house. Now a town square has generally been a physical location that was a public good, publicly owned, and thus you couldn't restrict people from being able to access it. Whereas with a private house, when you own that house, you can restrict as much speech as you want. So the question is which analogy applies. And there are some cases that
in very, very limited contexts have applied the sort
of town square analogy to private places. For example, certain
malls and company towns that were more like a traditional
physical public location, have under some circumstances had those restrictions applied to them, but in the vast majority of cases where you have a private
company or a private individual whether it's a restaurant,
or a commercial building, or your own private house, you're not bound by the First Amendment and you can restrict or promote
whatever speech you want within the confines of
your own private property. And sometimes those lines get blurred, there are no easy questions in law. So for example, when a
government entity rents a hall for a public forum, they can't engage in
viewpoint discrimination even though it's private property, but it's sort of been
commandeered by the government, or when a governmental
official uses Twitter as the official mouthpiece
of that government agency, there have been a few cases that have said you're more like a government
than a private entity, at least within that particular channel or account. But it's certainly been interesting to see not only Twitter's response
to the President's tweets but the President's response
to Twitter's response. - The choices that Twitter
makes when it chooses to suppress, edit, blacklist, shadow ban, are editorial decisions, pure and simple, they're editorial decisions. - After having some of his
tweets be warning gated, the President apparently
backed down a little bit and reversed course on
some of the statements that he was making and you
know, lost in this whole shuffle, is that Twitter actually
has a First Amendment right to express its opinion about what Trump is expressing in his tweets. Now like any individual
or company, Twitter will be
responsible for the content that they actually author, but it is of course related
to this larger context about content moderation
on websites and platforms and when a website is responsible
for that content and when it is not. And there are certainly
advantages and disadvantages to having Section 230 on the books. It is largely responsible for
how the internet looks today whether you like that or not. And while it's probably a bad idea to repeal Section 230 entirely, what we really need is a
law that prevents YouTubers who give great legal analysis
from getting demonetized when they talk about sensitive subjects. But since that probably
isn't going to happen, my creator friends and I teamed
up to build our own platform where creators don't need to
worry about demonetization or the dreaded algorithm. It's called Nebula and we're thrilled to be
partnering with CuriosityStream. Nebula is a place where creators can do what they do best, create. It's a place where we can both
house our content ad-free and also experiment with
original content and new series that probably wouldn't work on YouTube. In fact, if you liked this episode, the version that I put on
Nebula removes this ad entirely and replaces it with
an extended discussion about the First Amendment and Section 230, and why forcing websites to take a speech-related action is by its very definition anti-free speech. You'll get a whole extended rant that I'm not going to put
on YouTube and it's ad free. And Nebula features lots of YouTube's top educational-ish creators like Thomas Frank, Lindsay
Ellis, and TierZoo. We also get to collaborate in ways that probably wouldn't work on YouTube like Tom Scott's amazing game show "Money" where he pits a bunch of famous
YouTubers against each other in psychological experiments where they can either work
together or profit individually. It's amazing, it's wonderful. So what does this have to
do with CuriosityStream? Well, as your go-to source for the best documentaries
on the internet, they love educational content
and educational creators. And we worked out a deal where if you sign up for CuriosityStream with a link in the description, not only will you get CuriosityStream, but you'll also get Nebula for free and to be clear, that Nebula
subscription is not a trial, it's free for as long as you're
a CuriosityStream member. And for a limited time,
for less than $20 per year or $3 per month, you can get CuriosityStream
and Nebula together. And since we're spending a
lot of time inside these days, you might as well do it
with David Attenborough, or Chris Hadfield, or just watch Tom Scott torture
your favorite YouTubers. That's fine too. So if you click on the
link in the description, you'll get both CuriosityStream and Nebula for a ridiculously low price, or you can go to
CuriosityStream.com/LegalEagle. It's a great way to support this channel and educational content
directly for just $19 per year. So just click on the
link in the description or go to CuriosityStream.com/LegalEagle. Clicking on that link really
helps out this channel, I really appreciate it. So do you agree with my analysis? Should Section 230 still exist or should we repeal or amend it? Leave your objections in the comments and check out this playlist over here with all of my other real law reviews about pending legal issues on
the internet and otherwise. So just click on this video
and I'll see you in court.