Meta approved political ads in India spreading false info, inciting violence during election: Report

Mark Zuckerberg and Narendra Modi.

Meta, the company that owns Facebook and Instagram, approved several AI-generated political ads during India’s election that spread disinformation and incited religious violence, according to an exclusive report by the Guardian.

According to the report, the approved ads contained inflammatory language and slurs targeting Muslims in India, including phrases like “let’s burn this vermin” and “Hindu blood is spilling, these invaders must be burned.”

They also included Hindu supremacist rhetoric and false information about political leaders. One ad falsely claimed an opposition leader wanted to “erase Hindus from India” and depicted this leader alongside a Pakistan flag.

India Civil Watch International (ICWI) and the corporate accountability organisation Ekō created and submitted these ads to Meta’s ad library to test the company’s ability to detect and block harmful political content during India’s lengthy election period.

The report highlighted that all the ads were based on real hate speech and disinformation prevalent in India, demonstrating how social media platforms can amplify harmful narratives. The ads were submitted midway through the election, which ran from April to June 1 and would determine whether Prime Minister Narendra Modi and his Bharatiya Janata Party (BJP) remained in power.

Modi’s government has been criticised for promoting a Hindu-first agenda, leading to increased persecution of India’s Muslim minority. During this election, the BJP was accused of using anti-Muslim rhetoric to garner votes from Hindus, who constitute 80% of the population, the report claimed.

For instance, during a rally in Rajasthan, Modi referred to Muslims as “infiltrators” who “have more children,” a statement he later claimed was not directed at Muslims and noted he had “many Muslim friends.”

A BJP campaign video on the social media site X was recently ordered to be removed for demonising Muslims.

Researchers submitted 22 ads in various languages, including English, Hindi, Bengali, Gujarati, and Kannada, to Meta. Of these, 14 were approved outright, and three more were approved after minor modifications that did not change the provocative nature of the content. The researchers deleted the approved ads before they could be published.

Despite Meta’s public commitment to preventing the spread of AI-generated or manipulated content during the Indian election, its systems failed to detect that all the approved ads featured AI-manipulated images. Five ads were rejected for violating Meta’s policies on hate speech and violence, but the 14 approved ads also violated Meta’s policies on hate speech, bullying, harassment, misinformation, and incitement.

Maen Hammad, a campaigner at Ekō, criticised Meta for profiting from hate speech, stating that supremacists and racists could use targeted ads to spread violence and Meta would accept their money without question.

Meta did not recognise the 14 approved ads as political or election-related, even though many targeted political parties and candidates opposing the BJP. Under Meta’s policies, political ads require a specific authorisation process, yet only three submissions were rejected for not following this procedure. Consequently, these ads could have violated India’s election rules, which ban political advertising in the 48 hours before and during voting. The ads were uploaded to coincide with the election’s voting phases.

In response, a Meta spokesperson emphasised that people running election-related ads must undergo the required authorisation process and comply with applicable laws. Meta stated that it removes content that violates community standards, including AI-generated content, which can be reviewed and rated by independent fact-checkers. Advertisers must also disclose when AI is used to create or alter political or social issue ads.
