
Other Papers Say: Curtail spread of fake news

By The Philadelphia Inquirer
Published: February 27, 2023, 6:01am

The following editorial originally appeared in The Philadelphia Inquirer:

About 500 hours of video is uploaded to YouTube every minute. The online video-sharing platform houses more than 800 million videos and is the second-most visited site in the world, with 2.5 billion monthly active users.

Given the deluge of content flooding the site every day, one would surmise that YouTube must have an army of people guarding against the spread of misinformation — especially in the wake of the Jan. 6, 2021, insurrection that was fueled by lies on social media.

Well, not exactly.

Following recent cutbacks, there is just one person in charge of misinformation policy worldwide, according to a recent report in The New York Times. This is alarming, since fact-checking organizations have said YouTube is a major pipeline for the spread of disinformation and misinformation.

YouTube is owned by Google. The cutbacks were part of a broader reduction by Alphabet, Google’s parent company, which shed 12,000 jobs in an effort to boost profits that already totaled around $60 billion last year.

YouTube is not the only social media company easing safeguards put in place following the Russian disinformation campaign that helped elect Donald Trump in 2016.

Meta, which owns Facebook, Instagram and WhatsApp, slashed 11,000 jobs last fall and is reportedly preparing more layoffs. Those cuts came as Facebook, which made $23 billion last year, quietly reduced its efforts to thwart foreign interference and voting misinformation before the November midterm elections.

Twitter implemented even deeper cuts, laying off 50 percent of its employees days before the midterm election in November. The cuts included employees in charge of preventing the spread of misinformation. Additional layoffs in the so-called trust and safety team occurred in January.

To be sure, the First Amendment makes it difficult to regulate social media companies. But doing nothing is not the answer. The rise of artificial intelligence, which powers sophisticated chatbots such as ChatGPT and deepfake technology, will worsen the spread of fake news, further threatening democracy.

Policymakers must soon strike a balance between the First Amendment and regulating social media.

Texas and Florida have already muddied the regulation debate by passing laws that will upend the already limited content moderation efforts by social media companies and make the internet an even bigger free-for-all. The U.S. Supreme Court put off deciding whether to take up the cases, leaving the state laws in limbo for now.

Meanwhile, the European Union is pushing forward with landmark regulations called the Digital Services Act. The measure takes effect next year and aims to place substantial content moderation requirements on social media companies.

The spread of misinformation and disinformation is a growing threat to civil society. Social media companies can’t ignore their responsibility.
