Meta, the parent company of social media giants Facebook and Instagram, finds itself under the spotlight following claims that it has restricted pro-Palestine speech. Human Rights Watch (HRW) released a report raising concerns about the repeated removal or restriction of content supporting Palestine, even when that content allegedly did not violate the platforms’ rules. The accusations have ignited widespread concerns about freedom of expression and content moderation on these prominent social media platforms.
The Human Rights Watch Report: Uncovering Allegations
HRW’s report accuses Meta of systematically censoring pro-Palestinian voices during the Israel-Gaza conflict. The report suggests that the social media platform, in enforcing its content moderation policies, has engaged in a pattern of undue removal and suppression of protected speech related to Palestine. The concerns span across various aspects, including the removal of peaceful pro-Palestine content and the company’s response to government takedown requests.
The report highlights instances where Meta allegedly removed over 1,000 pieces of pro-Palestine content during October and November 2023. Notably, HRW claims that these removals occurred even when the content did not violate Meta’s rules. Examples cited include posts with images of injured or deceased individuals in Gaza hospitals and comments expressing sentiments like “Free Palestine” and “Stop the Genocide.”
The report also mentions cases where seemingly innocuous content, such as a series of Palestinian flag emojis, triggered warnings of potential harm.
Meta’s Response and Defence
Meta responded to the HRW report, stating that the allegations do not reflect its efforts to protect speech related to the Israel-Hamas conflict. The company emphasised the challenges of enforcing policies globally during a highly polarised and intense conflict. While acknowledging errors in content moderation, the social media platform strongly denied intentionally suppressing a particular voice and labelled the claim of systemic censorship as misleading.

This recent report adds to the ongoing scrutiny that Meta and other social media companies have faced regarding their handling of content related to the Israel-Hamas conflict. Meta’s own Oversight Board, tasked with reviewing content moderation decisions, overturned the company’s initial decision to remove two videos related to the conflict.
The board argued that these videos provided important information about human suffering on both sides of the issue. This intervention sheds light on the delicate balance social media platforms must strike between removing harmful content and upholding free expression.
Challenges in Content Moderation
The conflict between Israel and Hamas has posed unique challenges for social media platforms, where content is reported at an increased rate. The HRW report criticises Meta’s heavy reliance on automation to moderate content, pointing to instances of pro-Palestine comments being automatically removed and marked as “spam”. This raises questions about the efficacy of automated systems in distinguishing between legitimate content and potential violations.
One of the challenges Meta faces is the contentious nature of the Israel-Palestine conflict, where interpretations of what constitutes harm vary widely. HRW’s report points to the removal of comments and posts containing the slogan “from the river to the sea, Palestine will be free”, illustrating the complexities surrounding language that can be interpreted differently by different groups.
Policy Issues and Recommendations
The HRW report delves into Meta’s content moderation policies, particularly its Dangerous Organisations and Individuals policy. The inclusion of Hamas in this policy, based on the U.S. government’s designation, raises concerns for HRW, which suggests that Meta should rely on international human rights standards instead. The report also calls on Meta to publish the full list of organisations covered by the policy.

Meta acknowledges the need for policy adjustments and has committed to rolling out a revised version of the Dangerous Organisations and Individuals policy. The company aims to address concerns and refine its definition of “praise” of dangerous organisations. However, these updates are expected in the first half of the coming year, indicating an ongoing process of policy refinement.
Meta’s Historical Content Moderation Challenges
This is not the first time Meta has faced accusations of content moderation issues related to the Israel-Palestine conflict. HRW’s findings echo previous instances where Facebook was accused of censoring discussions about human rights issues in the region.
An independent investigation commissioned by Meta itself in 2021 found that the company’s content moderation policies adversely impacted the ability of Palestinians to share information about their experiences.
Despite commitments to address content moderation challenges, the HRW report suggests that Meta has not followed through on its promises. The company is accused of failing to meet its human rights responsibilities, leading to a replication of past patterns of abuse. The report calls for concrete steps toward transparency and remediation rather than relying on “tired apologies and empty promises”.
Elizabeth Warren’s Inquiry and Oversight Board’s Criticism
Senator Elizabeth Warren’s recent inquiry into reports of content demotion and removal on Instagram adds another layer to Meta’s challenges. Users have reported instances of their content being demoted or removed, pointing to potential issues with the platform’s algorithms. Additionally, Meta’s Oversight Board’s recent criticism of content removal decisions reinforces the need for a robust and transparent content moderation framework.
Navigating the Complex Misinformation Landscape
As Meta navigates the complex landscape of content moderation during a conflict, the scrutiny it faces raises broader questions about the role and responsibility of social media platforms. Balancing the preservation of free expression with the prevention of harm requires continuous refinement of policies and transparent decision-making processes.
Meta’s commitment to revising its policies and addressing concerns is a step forward, but the company will likely continue to face challenges in striking the right balance in content moderation, especially in the context of sensitive geopolitical conflicts. Users, for their part, can help limit the spread of misinformation by staying informed and remaining vigilant about the content they view and share.