Research from the University of Edinburgh uncovers significant discrepancies in Facebook’s removal of posts related to the 2021 Palestine-Israel conflict, highlighting potential cultural bias and prompting calls for more inclusive moderation policies.
A recent study by researchers at the University of Edinburgh has shed light on significant discrepancies in how Facebook enforces its content moderation policies, particularly for posts related to the 2021 Palestine-Israel conflict. The investigation examined 448 posts about the conflict, which took place between 10 and 21 May 2021, that were removed by Facebook, which is owned by Meta.
The research team included more than 100 native Arabic speakers, who reviewed each deleted post to determine whether it violated Facebook’s community standards and whether, in their personal view, the removal was justified. Each post was assessed by 10 different reviewers to ensure thorough evaluation.
The findings revealed that 53 per cent of the deleted posts were judged by a clear majority—defined as at least seven out of ten reviewers—not to breach any platform rules. Moreover, for approximately 30 per cent of the posts, all reviewers unanimously agreed that the content did not violate Facebook’s guidelines. The remaining posts were found to have violated the rules and were thus deemed appropriate for removal.
Of particular note, the study found that Facebook’s AI moderation system frequently flagged posts supportive of Palestinians even when they contained no hate speech or calls for violence. This has raised concerns about how well automated content moderation tools handle cultural and linguistic nuance.
Dr Walid Magdy, from the University of Edinburgh’s School of Informatics and lead author of the study, highlighted a critical gap between Facebook’s enforcement practices and the perceptions of fairness among users from marginalised regions. He told The Herald (Glasgow), “This is especially important in conflict zones, where digital rights are vulnerable and content visibility can shape global narratives. If platforms claim to support free expression and inclusion, they need to rethink how they apply community standards across different languages and cultural contexts. Global platforms can’t rely solely on Western views to moderate global content.”
The study underscores broader concerns about the dominance of Western perspectives in setting and enforcing moderation policies, which can overlook the cultural and linguistic context essential for equitable global content management. The researchers advocate greater diversity in the teams responsible for these policies and call for more transparency about how content is analysed and moderated.
The peer-reviewed research is set to be presented at the CHI 2025 Conference on Human Factors in Computing Systems and was conducted in collaboration with experts from Hamad Bin Khalifa University (HBKU) in Qatar and the University of Vaasa in Finland.
Meta, Facebook’s parent company, has been approached for comment on the study’s findings.
Source: Noah Wire Services
- https://www.hrw.org/news/2021/10/08/israel/palestine-facebook-censors-discussion-rights-issues – Documents Facebook’s wrongful removal of Palestinian content during the May 2021 hostilities, aligning with the study’s focus on moderation discrepancies during the same conflict period.
- https://www.hrw.org/report/2023/12/21/metas-broken-promises/systemic-censorship-palestine-content-instagram-and – Details systemic censorship of Palestinian content on Meta platforms, reinforcing the study’s findings about AI moderation bias against pro-Palestine posts.
- https://ngo-monitor.org/reports/the-influence-of-ngos-on-meta-facebook/ – Discusses the Oversight Board’s 2021 recommendation for an independent review of Meta’s moderation biases, corroborating concerns about enforcement discrepancies highlighted in the study.
- https://www.accessnow.org/publication/how-meta-censors-palestinian-voices/ – Provides evidence of Meta’s systematic silencing of Palestinian voices, supporting the study’s claims about cultural and linguistic insensitivity in automated moderation.
- https://arxiv.org/html/2504.02175v1 – Analyzes moderation guidelines’ cultural biases, aligning with the study’s emphasis on Western-centric policies overlooking Middle Eastern linguistic nuances.
- https://www.heraldscotland.com/news/25116126.study-says-facebook-removed-gaza-posts-not-break-rules/?ref=rss – Original report in The Herald Scotland; the link could not be accessed for verification.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 9
Notes: The narrative discusses a study of Facebook content removals from May 2021, but the research itself is recent, and its forthcoming presentation at CHI 2025 confirms its currency. There is no indication the content is recycled or outdated; rather, it reports on a new peer-reviewed study, which increases its freshness.
Quotes check
Score: 8
Notes: The direct quote from Dr Walid Magdy to The Herald (Glasgow) appears original, with no earlier online records found, suggesting it is a first-hand statement given specifically for this narrative, which supports credibility.
Source reliability
Score: 7
Notes: The narrative originates from The Herald Scotland, a well-known regional publication with a credible reputation but not a global news agency. The study cited is peer-reviewed and associated with reputable institutions, adding reliability to the content.
Plausibility check
Score: 9
Notes: The content regarding Facebook’s AI moderation issues and cultural bias is plausible and consistent with known challenges in global content moderation. The involvement of reputable universities and the scheduled conference presentation support plausibility; no contradictory information was found.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary: The narrative is current, based on a recent peer-reviewed study due to be presented at CHI 2025, and features an original quote attributed directly to the lead researcher. The source is moderately reputable, and the claims about AI moderation and cultural bias are plausible and well supported by academic research. No evidence suggests old or recycled content.