To gain more followers, a 17-year-old girl in Fatehpur Sikri, Agra, filmed a ‘fake’ suicide attempt and posted the video on social media. The incident on Thursday triggered an alert from Meta’s AI monitoring system, after which police reached her home.

According to police, the clip showed the teenager drinking a liquid from a bottle and then collapsing. Meta’s AI system flagged the clip as a potential suicide attempt and immediately alerted the authorities concerned to take action.

Soon after, the social media cell alerted the local police, and the girl’s location was traced. Officers found that the video was ‘staged’ and the girl was unharmed; she had drunk only water, not any poisonous substance. The girl admitted her mistake, was counselled by the police and was warned against posting misleading content about self-harm on social media.

Meta’s AI and other safety technologies on Facebook and Instagram identify potential crimes and self-harm content through a combination of proactive artificial intelligence monitoring, machine learning and human review. When these systems detect imminent risk or illegal activity, they take actions ranging from displaying helpline resources to notifying law enforcement.
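Meta has not published the details of this pipeline, but the tiered response it describes, escalating from showing resources to contacting authorities, can be illustrated with a simplified sketch. The thresholds, field names and action labels below are assumptions made for illustration, not Meta’s actual system.

```python
# Illustrative sketch only: Meta's real pipeline is proprietary. All names,
# thresholds and action labels here are assumptions for this example.
from dataclasses import dataclass


@dataclass
class DetectionResult:
    risk_score: float  # 0.0 (no concern) .. 1.0 (imminent risk)
    is_illegal: bool   # e.g. flagged as a potential crime


def choose_action(result: DetectionResult) -> str:
    """Map a detection result to a tiered response, from softest to strongest."""
    if result.is_illegal or result.risk_score >= 0.9:
        return "notify_law_enforcement"    # imminent risk: alert the authorities
    if result.risk_score >= 0.6:
        return "escalate_to_human_review"  # urgent, but a reviewer decides
    if result.risk_score >= 0.3:
        return "show_helpline_resources"   # surface support resources to the user
    return "no_action"


if __name__ == "__main__":
    print(choose_action(DetectionResult(risk_score=0.95, is_illegal=False)))
    print(choose_action(DetectionResult(risk_score=0.40, is_illegal=False)))
```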

When a person expresses suicidal thoughts, it is critical to get them help as quickly as possible. The suicide prevention resources available on Facebook and Instagram were developed with leading mental health organisations and with input from people with personal experience.

Machine learning has expanded Meta’s ability to identify possible suicide or self-injury content, and the company uses this technology in several countries to get timely help to those in need.

The technology relies on pattern-recognition signals, such as phrases and comments expressing concern, to identify possible distress.
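As a rough illustration of how phrase-based signals might be scored, here is a toy example. The phrase lists, weights and function names are invented for this sketch and do not reflect Meta’s real classifiers, which are machine-learning models rather than simple keyword lists.

```python
# Toy illustration of pattern-recognition signals; phrases and weights are
# invented for this example and are not Meta's.
import re

POST_PATTERNS = {
    r"\b(can't go on|end it all|goodbye forever)\b": 0.6,
    r"\b(no reason to live)\b": 0.7,
}
COMMENT_PATTERNS = {
    r"\b(are you ok\??|please call me|we're worried about you)\b": 0.3,
}


def distress_score(post_text: str, comments: list[str]) -> float:
    """Combine weighted phrase matches from a post and its comments into one score."""
    score = 0.0
    for pattern, weight in POST_PATTERNS.items():
        if re.search(pattern, post_text, re.IGNORECASE):
            score += weight
    for comment in comments:
        for pattern, weight in COMMENT_PATTERNS.items():
            if re.search(pattern, comment, re.IGNORECASE):
                score += weight
    return min(score, 1.0)


if __name__ == "__main__":
    # ~0.9: a distress phrase in the post plus a concerned comment
    print(distress_score("I just can't go on anymore",
                         ["Are you ok? Please call me"]))
```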

“We use artificial intelligence to prioritise the order that our team reviews reported posts, videos and live streams. This ensures that we can efficiently enforce our policies and get resources to people quickly. It also lets our reviewers prioritise and evaluate urgent posts, contacting emergency services when members of our community might be at risk of harm. Speed is critical,” according to Meta.
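The prioritisation Meta describes, putting the most urgent reports at the front of the review queue, can be sketched as a simple priority queue. The scores, field names and the bump given to live streams below are illustrative assumptions, not Meta’s actual ranking system.

```python
# Hedged sketch of review-queue prioritisation; scores and fields are made up.
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class ReportedItem:
    priority: float                        # lower value = reviewed sooner
    content_id: str = field(compare=False)
    is_live: bool = field(compare=False, default=False)


def build_review_queue(items):
    """Order reported posts, videos and live streams so the most urgent are reviewed first."""
    heap = []
    for content_id, model_risk, is_live in items:
        # Live streams are bumped because harm may still be preventable in real time.
        priority = -(model_risk + (0.2 if is_live else 0.0))
        heapq.heappush(heap, ReportedItem(priority, content_id, is_live))
    return heap


if __name__ == "__main__":
    queue = build_review_queue([
        ("post_1", 0.40, False),
        ("live_7", 0.70, True),
        ("video_3", 0.85, False),
    ])
    while queue:
        print(heapq.heappop(queue).content_id)  # live_7, video_3, post_1
```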

The content is then escalated to the community operations team, which decides whether it violates Meta’s policies and whether to recommend contacting local emergency responders.

Meta’s technology for identifying possible suicide and self-injury content works across Facebook and Instagram posts, as well as Facebook Live and Instagram Live.

Source: news18.com