Twitter will now warn users before they send a potentially offensive reply
If you are about to reply with harmful or offensive language to a post by someone you don't know on Twitter, the microblogging site says it will now warn you before the reply is sent.
Twitter has rolled out a feature that warns users when a reply they are composing may be harmful or offensive, which the company believes could help curb hostility on the platform.
In other words, the upgraded feature is better at spotting "strong language" and now takes into account your relationship with the person you're replying to.
"For example, if two accounts follow and reply to each other often, there's a higher likelihood that they have a better understanding of the preferred tone of communication," Anita Butler and Alberto Parrella from Twitter said in a joint statement on Wednesday.
In 2020, Twitter first tested prompts that encouraged people to pause and reconsider a potentially harmful or offensive reply before they hit send.
"Starting today, we're rolling these improved prompts out across iOS and Android, starting with accounts that have enabled English-language settings," the company informed.
Early tests revealed that when prompted, 34 per cent of people revised their initial reply or decided not to send it at all.
"After being prompted once, people composed on average 11 per cent fewer offensive replies" thereafter, Twitter said.
If prompted, people were less likely to receive offensive and harmful replies back.
Since the early tests, Twitter says it has refined the systems that decide when and how to send these reminders, including accounting for the relationship between accounts and improving its detection of strong language.
Twitter said it will continue to explore how prompts, such as reply prompts and article prompts, and other forms of intervention can encourage healthier conversations on the platform.