London: Anti-vaxxer groups are using carrot emojis to get around social media platforms' automated moderation systems, which are designed to detect content that breaks platform rules, the BBC reported on Friday.

The phrase "vaccine" was replaced with the carrot emoji in several Facebook groups, according to an investigation. Members were able to avoid the automatic content moderation tools on Facebook since its algorithm typically recognises words over emojis.

One Facebook group using this tactic had more than 250,000 members, according to the report.

The groups, which were accessible by invitation only, had explicit rules forbidding members from using certain terms: "Do not use the c word, v word or b word ever", referring to "COVID," "vaccine" and "booster". Members were advised to "use code words for everything."

The investigation found that groups using the carrot emoji were spreading unproven claims that vaccines cause harm or even death, Arab News reported.

After being invited to join one of the groups, disinformation researcher Marc Owen Jones from Hamad bin Khalifa University in Qatar spotted the trend and posted about it on Twitter.

"It was people giving accounts of relatives who had died shortly after having the COVID-19 vaccine", he said. "But instead of using the words 'COVID-19' or 'vaccine,' they were using emojis of carrots.

"Initially I was a little confused. And then it clicked — that it was being used as a way of evading, or apparently evading, Facebook's fake news detection algorithms."

The groups were removed after the BBC alerted Meta to the findings, though some soon reappeared.

"We have removed this group for violating our harmful misinformation policies and will review any other similar content in line with this policy. We continue to work closely with public health experts and the UK government to further tackle COVID vaccine misinformation," Meta said in a statement.

Over the past two years, Meta and other social media platforms have faced growing scrutiny for failing to remove false information about COVID-19 and vaccines.

Facebook said it has removed more than 20 million pieces of content containing false information about COVID-19 or vaccines since the start of the pandemic.

Because moderation AI is trained chiefly on text and words, emojis are harder for algorithms to recognise, which may explain how these groups went undetected for so long.

Because emoji-based hate poses a growing problem for automated detection, researchers at the University of Oxford and The Alan Turing Institute developed HatemojiCheck, a test suite that exposes weaknesses in current hate-speech detection models and identifies hateful content conveyed through emojis.
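One straightforward mitigation, sketched below under stated assumptions, is to normalise emojis into text tokens before any word-based filtering, for instance with the open-source Python `emoji` package's `demojize` function. This makes emojis visible to text-trained models at all, though recognising a code word such as the carrot still requires context or a learned mapping. This is an illustrative sketch, not HatemojiCheck and not any platform's real pipeline.

```python
# Hypothetical preprocessing step: translate emojis into text tokens so a
# word-based model can see them. Uses the open-source `emoji` package
# (pip install emoji); an illustrative sketch, not HatemojiCheck and not
# any platform's actual moderation pipeline.
import emoji

def normalise(post: str) -> str:
    # emoji.demojize replaces each emoji with its text name,
    # e.g. "🥕" becomes ":carrot:".
    return emoji.demojize(post)

print(normalise("my aunt had the 🥕 last week"))
# -> "my aunt had the :carrot: last week"
# ":carrot:" is now a visible token, but treating it as a code word for
# "vaccine" still requires context or a learned code-word mapping.
```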

