The following is presented as part of The Columbian’s Opinion content, which offers a point of view in order to provoke thought and debate of civic issues. Opinions represent the viewpoint of the author. Unsigned editorials represent the consensus opinion of The Columbian’s editorial board, which operates independently of the news department.
Back in 1996, in an effort to encourage the growth of the new “internet,” Congress passed Section 230 of the Communications Decency Act, immunizing the new platforms from liability for their users’ posts.
If you ask Google, which owns YouTube, it will tell you that the law provides absolute immunity from liability for actions — including terrorist attacks — that are arguably provoked by violent videos recommended by its algorithms.
If you ask the parents of a girl killed in a terrorist attack allegedly inspired by videos that a YouTube algorithm recommended, they will tell you that YouTube and its parent, Google, should be held responsible.
In this strange debate, or, rather, in this debate with strange bedfellows, liberals are arguing for less free speech and conservatives for more. That is a function of the fact that the platforms have banned former President Donald Trump and his ilk, to the great consternation of conservatives, but have not done enough, according to liberals, to ban hate, white supremacy and right-wing lies.
Two hundred-plus years of jurisprudence interpreting the First Amendment, which protects free speech, have made clear just how difficult it is to draw lines in this arena. Speech is powerful. It does provoke. And in the marketplace of ideas, the truth does not always win out; sometimes, hate, ignorance and evil attract more adherents. But recognizing that speech can be dangerous doesn’t tell you how to draw lines or where the lines should be.
The test for incitement — falsely crying, “Fire!” in a crowded theater — requires that the threat of violence be imminent, which is not an easy test to meet, especially if it is being applied before anything goes wrong. Hindsight may be 20/20, but the platforms attempting to engage in foresight have no such advantage.
Should Section 230 be modified or eliminated, so as to subject platforms to tort standards of liability? Those tort standards would still require plaintiffs, such as the parents in the case before the court, to prove a “but for” causal connection between the offensive videos and the terrorist acts. Even in theory, that is not an easy standard to meet.
The traditional challenge for civil libertarians is to assume that the power to decide which speech is allowed rests not in your hands but in the hands of someone you disagree with entirely, and then to ask yourself how much power that person should have. Thus, the traditional argument for free speech is that no one can be trusted, or should be, to decide for us what we should or should not hear.
Lies about the dangers of vaccines, which could cost children’s lives? Should those be protected? Why?
Violent videos spewing hatred based on religion or race or ethnic origin? Should hate be protected?
Calls to overturn democratically elected slates of electors? Is there a place for them online?
Section 230 makes it easy to avoid these questions, but the Supreme Court, in agreeing to hear the latest case, is not taking that easy route.