Instagram on Monday announced new features aimed at stopping online bullying on its platform, including a warning to people as they are preparing to post abusive remarks.
“It’s our responsibility to create a safe environment on Instagram,” said a statement from Adam Mosseri, head of the social platform.
One new tool being rolled out is an AI-generated warning that notifies users, before they post, that their comment may be considered offensive.
“This intervention gives people a chance to reflect and undo their comment and prevents the recipient from receiving the harmful comment notification,” Mosseri said.
Another new tool is aimed at limiting the spread of abusive comments on a user’s feed. A feature called “restrict,” currently being tested, will make comments from an offending person visible only to that person.
“You can choose to make a restricted person’s comments visible to others by approving their comments,” Mosseri added.
“Restricted people won’t be able to see when you’re active on Instagram or when you’ve read their direct messages.”
The move by Instagram is the latest in a series of actions by social networks to combat cyberbullying, hate speech and abusive conduct.