SAN JOSE, Calif. — Instagram wants you to think about what you post the next time you put something up on the social-media site. And you might just get a warning about what you say in that next post.
As part of an ongoing effort to cut down on hate speech, threats and bullying online, Instagram is implementing new artificial intelligence tools that will read the words in a post, determine if the language might be hurtful or offensive, and then ask if the person doing the posting wants to reconsider what they are about to put online.
“These tools are grounded in a deep understanding of how people bully each other and how they respond to bullying on Instagram,” said Instagram head Adam Mosseri, in a blog post announcing the new features last week.
Facebook-owned Instagram has more than 1 billion monthly active users, and hate speech and bullying have become a growing issue for the site as it has grown to be almost as popular and influential as its parent company.
When a person starts to write something that could be considered mean-spirited, he or she will get a message saying “Are you sure you want to post this?” The page also will include links to undo the post, and to get more information about why the post’s language might be seen as hurtful to the reader.
Instagram also said it will begin testing a new feature called Restrict, which allows a person to block other users and hide those users’ comments without notifying them.