
Instagram now warns parents if their teen searches for suicide or self-harm content


Instagram will start warning parents if their teen repeatedly searches for terms related to suicide or self-harm within a short period of time, the company announced Thursday. The notifications are rolling out in the coming weeks to parents who have set up parental controls on Instagram.

The Meta-owned social network says that while it already blocks searches for suicide and self-harm content, the new notifications are designed to make sure parents know if their child is repeatedly attempting these searches so they can step in and help their teen.

Searches that may trigger an alert include terms that encourage suicide or self-harm, terms indicating a young person may be at risk of self-harm, and words such as “suicide” or “self-harm.”

Instagram says parents will receive an alert via email, text message, or WhatsApp, depending on their settings, as well as an in-app notification. The alert will include resources to help parents talk with their teen.

Image credit: Instagram

The move comes as Meta and other major tech companies face several lawsuits seeking to hold social media giants accountable for harming young people.

During trial testimony in the US District Court for the Northern District of California this week, Instagram head Adam Mosseri was questioned by plaintiffs’ attorneys about the app’s delay in rolling out important safety features, including nudity filters for teens’ private messages.

In addition, testimony in a separate case before the Los Angeles County Superior Court revealed that an internal Meta study found parental supervision tools had little effect on limiting children’s social media use. The study also found that children who are struggling in their lives have a harder time managing their social media use.

Given the ongoing lawsuits accusing the company of failing to protect young people on its platforms, the timing of the new notifications isn’t exactly surprising.

The company says it wants to avoid sending these notifications unnecessarily, as overuse could reduce their overall effectiveness.

“To strike this balance, we analyzed Instagram search activity and consulted the experts on our suicide and self-harm team,” Instagram explained in a post. “We chose thresholds that require a minimum number of searches within a short period of time, while still erring on the side of caution. While this means we may sometimes notify parents when there is no real cause for concern, we feel, and experts agree, that this is the right starting point, and we will continue to monitor and listen to feedback to make sure we land in the right place.”

The notifications are rolling out in the US, UK, Australia, and Canada next week, and will be available in other regions later this year.

In the future, Instagram also plans to trigger these notifications when a teen tries to engage the app’s AI in conversations about suicide or self-harm.
