Facebook 'supreme court' admits 'frustrations' in 5 years of work

Often referred to as Meta's 'supreme court', the Oversight Board began work in 2020. Photo: Kirill KUDRYAVTSEV / AFP/File
Source: AFP

An oversight board created by Facebook to review content-moderation decisions on Thursday trumpeted improved transparency and respect for people's rights in a survey of its first five years of work, while acknowledging "frustrations" with its arm's-length role.

Facebook -- since renamed Meta -- announced the Oversight Board, often referred to as the group's "supreme court", at a 2018 nadir of public trust in the tech giant.

The Instagram and WhatsApp owner's image had been tarnished by episodes like the Cambridge Analytica data-breach scandal and dis- and misinformation around crucial public votes such as Brexit and the 2016 US presidential election.

The Oversight Board began its work in 2020, staffed with prominent academics, media veterans and civil society figures.

It reviews selected cases where people have appealed against Meta's moderation decisions, issuing binding rulings on whether the company was right to remove content or leave it in place.

It also issues non-binding recommendations on how lessons from those cases should be applied to updating the rules for billions of users on Meta's Facebook, Instagram and Threads platforms.

Over the past five years, the board has secured "more transparency, accountability, open exchange and respect for free expression and other human rights on Meta's platforms", it said in a report.

The board added that Meta's oversight model -- unusual among major social networks -- could be "a framework for other platforms to follow".

The board, which is funded by Meta, has legal guarantees that the company will implement its decisions on individual pieces of content.

But the company is free to disregard its broader recommendations on moderation policy.

"Over the last five years, we have had frustrations and moments when hoped-for impact did not materialize," the board wrote.

'Systemic changes'

Some outside observers of the tech giant are more critical.

"If you look at the way that content moderation has changed on Meta platforms since the establishment of the board, it's rather gotten worse," said Jan Penfrat of Brussels-based campaigning organisation European Digital Rights (EDRi).

Today on Facebook or Instagram, "there is less moderation happening, all under the guise of the protection of free speech," he added.

Effective oversight of moderation for hundreds of millions of users "would have to be a lot bigger and a lot faster", with "the power to actually make systemic changes to the way Meta's platforms work", Penfrat said.

One major outstanding issue is chief executive Mark Zuckerberg's surprise decision in January to axe Meta's US fact-checking programme.

That scheme had employed third-party fact checkers, AFP among them, to expose misinformation disseminated on the platform.

In April, the Oversight Board said the decision to replace it with a system based on user-generated fact-checks had been made "hastily".

Its recommendation from that time for "continuous assessments of the effectiveness" of the new system is currently marked as "in progress" on the company's website.

Last month, the Oversight Board said it would fulfil Meta's request for its advice on expanding worldwide the so-called "Community Notes" programme.

The company said it needed help "establishing fundamental guiding principles" for rolling out the scheme and identifying countries where it might not be appropriate, for example due to limits on freedom of expression.

AI decisions looming

Looking ahead, "the Board will be widening its focus to consider in greater detail the responsible deployment of AI tools and products," the report said.

Zuckerberg has talked up plans for deeper integration of generative artificial intelligence into Meta's products, calling it a potential palliative for Western societies' loneliness epidemic.

But 2025 has also seen mounting concern over the technology, including a spate of stories of people killing themselves after extended conversations with AI chatbots.

Many such "newly emerging harms... mirror harms the Board has addressed in the context of social media", the Oversight Board said, adding that it would work towards "a way forward with a global, user rights-based perspective".
