Facebook and Instagram discontinue the use of fact-checkers

Meta has announced it will discontinue the use of independent fact-checkers on Facebook and Instagram, opting instead for "community notes" akin to the system employed by X (formerly Twitter). This change delegates the responsibility for verifying the accuracy of posts to users themselves.

In a video accompanying a blog post on Tuesday, CEO Mark Zuckerberg stated that third-party moderators had been "too politically biased" and emphasized a return to "free expression." Joel Kaplan, who is stepping into Sir Nick Clegg’s role as Meta’s head of global affairs, defended the decision, acknowledging that while the reliance on independent moderators was "well-intentioned," it had often led to the suppression of users' voices.

Critics, however, are alarmed. Ava Lee of Global Witness, a group advocating accountability for tech giants, accused Meta of pandering to Donald Trump’s administration. "Zuckerberg’s announcement is a clear attempt to align with Trump – at the expense of public safety," she said. Lee argued that Meta’s framing of this move as a step toward avoiding censorship was merely a political strategy to sidestep responsibility for the spread of hate and misinformation.

Meta’s existing fact-checking program, introduced in 2016, refers questionable posts to independent organizations for review. Posts flagged as false or misleading are labeled and demoted in users’ feeds. However, this system will soon be replaced by community notes, starting in the U.S. Meta clarified it has "no immediate plans" to phase out third-party fact-checkers in the UK or EU.

The community notes model, borrowed from X, surfaces user-written context on disputed posts only when contributors with differing perspectives rate that context as helpful. Elon Musk, X’s owner, praised Meta’s adoption of this approach, calling it "cool."
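X has published the ranking code behind Community Notes, and the core idea can be illustrated with a small matrix-factorization sketch: each rating of a note is modeled as a global intercept plus a user intercept, a note intercept, and a product of "viewpoint" factors, and only the note intercept (the approval that cannot be explained away by viewpoint alignment) counts toward showing the note. The Python toy below is an illustrative sketch in that spirit; the data, names, and hyperparameters are invented for demonstration and are not Meta's or X's production algorithm.

```python
# Minimal, illustrative sketch of a "bridging-based" rating model in the
# spirit of the open-source Community Notes ranking algorithm. All data
# and hyperparameters are assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(0)

# Toy ratings: (user_id, note_id, rating), where 1.0 = "helpful" and
# 0.0 = "not helpful". Users 0-2 and 3-5 stand in for two viewpoints.
ratings = [
    # Note 0 is rated helpful by BOTH groups (a "bridging" note).
    (0, 0, 1.0), (1, 0, 1.0), (3, 0, 1.0), (4, 0, 1.0),
    # Note 1 is rated helpful by only one group (a partisan note).
    (0, 1, 1.0), (1, 1, 1.0), (2, 1, 1.0),
    (3, 1, 0.0), (4, 1, 0.0), (5, 1, 0.0),
]

n_users, n_notes, dim = 6, 2, 1
mu = 0.0                                      # global intercept
user_b = np.zeros(n_users)                    # per-user leniency
note_b = np.zeros(n_notes)                    # per-note helpfulness intercept
user_f = rng.normal(0, 0.1, (n_users, dim))   # user viewpoint factor
note_f = rng.normal(0, 0.1, (n_notes, dim))   # note viewpoint factor

lr, lam_f, lam_b = 0.05, 0.03, 0.15           # heavier penalty on intercepts

for _ in range(2000):
    for u, n, r in ratings:
        pred = mu + user_b[u] + note_b[n] + user_f[u] @ note_f[n]
        err = pred - r
        # SGD step on squared error with L2 regularization.
        mu        -= lr * err
        user_b[u] -= lr * (err + lam_b * user_b[u])
        note_b[n] -= lr * (err + lam_b * note_b[n])
        user_f[u], note_f[n] = (
            user_f[u] - lr * (err * note_f[n] + lam_f * user_f[u]),
            note_f[n] - lr * (err * user_f[u] + lam_f * note_f[n]),
        )

# A note is surfaced only when its intercept (the agreement left over
# after the viewpoint factors absorb partisan polarization) is high.
# Note 0, liked across both groups, should end with the larger intercept.
for n in range(n_notes):
    print(f"note {n}: helpfulness intercept = {note_b[n]:+.3f}")
```

The design choice doing the work is the heavier regularization on the intercepts: a note approved only along partisan lines gets its ratings explained by the cheap viewpoint-factor product, so its helpfulness intercept stays low, while a note approved across viewpoints can only be fit by raising the intercept itself.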

Still, safety advocates and organizations have raised concerns. Ian Russell, chairman of the UK-based Molly Rose Foundation, called the move a "serious risk to online safety," particularly for vulnerable users. "We’re seeking clarity on whether this will impact content related to suicide, self-harm, and depression," he said.

Fact-checking body Full Fact, which works with Meta in Europe, denied allegations of bias and described the changes as a "regrettable step backward" that could have global consequences. Chris Morris, its CEO, warned of the potential chilling effects on accountability and truth.

In its blog post, Meta signaled a broader rollback of policies that had previously restricted discussions on sensitive topics like immigration and gender identity. "It’s not right that what’s acceptable on TV or in Congress is not allowed on our platforms," the post argued.

The timing of these changes has fueled speculation, given Zuckerberg’s recent overtures toward President-elect Trump. Relations between the two, previously strained, have improved, with Zuckerberg dining at Trump’s Mar-a-Lago estate in November. Meta has also contributed $1 million to Trump’s inauguration fund.

Zuckerberg described the recent U.S. elections as a "cultural tipping point" favoring free speech and expressed optimism about the shift. Meanwhile, Kaplan’s appointment in place of Clegg as head of global affairs has been interpreted as a signal of Meta’s evolving moderation strategy and political realignment.

Kate Klonick, a law professor at St. John’s University, remarked that this shift reflects a broader trend in tech governance. "We’re seeing a decisive swing away from trust and safety measures towards free speech absolutism," she said, attributing the momentum to Musk’s changes at X.

This development highlights the growing tension between content moderation and free expression in the digital age, with tech companies navigating an increasingly polarized political landscape.
