Suicide attempts even among 6-year-olds. AI is trying to help [INTERVIEW]
Michał Wroczyński and Patrycja Tempska, photo: press materials

Samurai Labs is a deep tech company with headquarters in San Francisco, Northern California, and a lab in Gdynia, Poland, whose mission is to counteract cyberviolence. The team has created a Cyber Guardian that protects children and online communities from abuse.
The Cyber Guardian ensures safety in games and online communities (Discord, online forums, remote education, and eSports), where it detects and blocks overtly aggressive and harmful content while educating users, supporting positive communication, and de-escalating conflicts.
The interdisciplinary team of Samurai Labs, in partnership with the education and support platform Życie Warte Jest Rozmowy (Life is Worth a Conversation), is working on the use of neuro-symbolic algorithms to analyze hundreds of millions of online conversations and provide effective help to people in suicide crisis.
In our conversation with Michał Wroczyński, CEO of Samurai Labs, and Patrycja Tempska, Impact Co-founder at Samurai Labs, we discuss how artificial intelligence (AI) can protect not only the youngest.
Are we going to talk about suicides among the youngest?
Michał Wroczyński: It is still a taboo subject. Most of us can imagine that our children may be exposed to a pedophile or be cyberbullied, but we can’t even think that they could attempt suicide. People are slowly becoming more aware of the problem, but there is still a lot to do.
Patrycja Tempska: More and more children and young people are attempting suicide. The number of suicidal behaviors among teenagers in Poland increased by 77 percent in 2021. By September 2022, the figure had already matched the total for the entire previous year. Unfortunately, the problem and its scale are an increasingly serious challenge, given that suicide attempts are reported even in young children.
Children?
P: Yes. There have been cases of six- and seven-year-olds who had suicidal thoughts and tried to take their own lives.
How does a six-year-old know how to attempt suicide?
P: On the one hand, cyberspace is a place where you can express yourself, join various communities and build valuable relationships or pursue your interests, but on the other, this is also where children are particularly exposed to dangerous content.
Information about the methods can be easily found online. It is common for users to post such data in discussions on social networking sites. When sharing their stories, they often describe the methods they used. Unfortunately, the side effect is that their story can inspire other people in crisis to follow suit.
Why is it so easy for young people to access information on how to attempt suicide, but they can’t find specialist help online?
M: Unfortunately, there are special forums and groups available on popular online portals where users can read about different self-harm methods and get inspired. Finding them is not difficult. They provide information on how to injure yourself without your parents realizing it.

Samurai Labs cooperates with Dr. Halszka Witkowska, an expert in suicidology and the initiator of the Życie Warte Jest Rozmowy platform, who talks about enormous, if not ubiquitous, loneliness. For various reasons, parents aren't sufficiently present in the lives of their children and often don't understand their problems. We don't talk, pay attention to each other or spend time together because we are busy looking at our screens. Suicide is a consequence of loneliness. People attempt it because of tremendous suffering. They don't know whom to turn to for help and they believe they have no one to live for.

Poorer relationships, the Covid-19 pandemic and isolation from school life have all contributed to higher numbers of suicide attempts among children and young people. Suicidal thoughts have been reported by as many as 20 percent of adolescents in the United States, while 9 percent have attempted suicide at least once.
Are such self-harm forums legal?
M: No regulations exist that would address this issue. A giant step has been made by the United Kingdom, where parliament has already published an important document announcing maximum fines for websites and games that don't protect users from content related to self-harm or that fail to help users at risk of suicide.
How high are the fines?
M: Up to 10% of the company’s global annual turnover. This marks an impending earthquake because the bill will apply to everyone: social media, mobile games and online forums. These are extremely important actions because the number of suicide attempts among children is always the most alarming indicator. Things must be really bad if the youngest members of society are willing to take such a dramatic step and end their lives. Young people exposed to cyberbullying are also at higher risk of suicidal thoughts and attempts. A person experiencing cyberbullying is more than twice as likely to have suicidal thoughts and behavior as a non-bullied person. Therefore, we need to look at the general context and possible causes in all their complexity.
Why have online aggression and hate speech become so common among the youngest? Some stories told by teachers are truly terrifying.
P: As social beings, we are susceptible to how others behave and what they do, and we tend to imitate them. In terms of suicidal behavior, we can speak of the Werther Effect, where the reports of deaths by suicide of celebrities and other public figures correlate with a rise in suicide attempts. On a positive note, the more we talk in public about available help options, stories of hope and recovery from crisis, preventive actions and mental health prophylaxis, the more people turn to helplines for support.
It is similar with hate speech and aggression online. In many cases, cyberspace is not moderated on the scale it should be. When confronted on a daily basis with personal attacks, profanity, threats or blackmail, all of which are common types of behavior, for example, in video games, kids learn to assimilate these patterns and recognize them as acceptable because of their prevalence. Another pressing issue is the increasingly common radicalization and extremism, to which young people are particularly vulnerable. Professor Michał Bilewicz’s study shows that young people whose only knowledge of the world comes from the internet no longer recognize hate speech as offensive to minorities.
M: Internet violence has long been present, but it has now reached a critical point where the currently available tools to combat it are no longer sufficient. What does it typically look like today? If you want to report inappropriate behavior, for example in a game, you have to follow a specific procedure. Do you know how many internet users report harmful content? Various reports show that well under half of them do so.
Because reporting it simply doesn’t work. I have tried it on Facebook many times and failed in 99% of the cases.
M: We recently tried to report a user who was fantasizing on a large social media platform about a mass shooting at a school. I received a reply from the platform that they would not respond because the comment didn't violate their network standards. Websites use AI to detect such threats, but there is always a person at the end of each process. The moderator often has to deal with hundreds of cases per hour and is simply overwhelmed. At Samurai Labs, we want to break this paradigm: we strongly believe that prevention is better than cure.
What signs should alert me that my child may be experiencing aggression online?
P: I believe that, to some extent, you have to dive into the online spaces where your son or daughter is active to gain some insight into the content your child may be exposed to. We should be sensitive to any change in behavior: when the child no longer enjoys their hobbies, becomes withdrawn and experiences physical symptoms such as stomachaches, headaches or problems with sleep, these are red warning lights. Given the prevalence of smartphones, children can potentially experience cyberbullying 24 hours a day. Unlike with schoolyard bullying, even home is no longer a safe haven.
Checking on an eight-year-old or nine-year-old is relatively simple. However, scrolling through the social media of a rebellious teenager can just as well have the opposite effect.
P: In a relationship with a child or a teenager, talking and paying attention to their needs and problems is critical. The signs I mentioned before may be symptoms of many other problems, not necessarily related to the internet.
I would recommend regularly trying to address the issue and asking proactive questions about whether the child has experienced anything disturbing. Avoid lecturing. Young people usually know more about cyberbullying than older generations. It may be a good idea to use related pop culture motifs, because this is how you can get through to teenagers.
M: In my opinion, it is not at all about control, because you don't really have it. Building a relationship with the child is about showing them that they can come to you for advice and help whenever something upsets them. Obviously, this is very difficult. It requires understanding that permanent control is not the solution; in fact, it is quite the opposite. There is a country where the percentage of those who are cyberbullied is decreasing: Finland.
Why Finland?
M: It promotes education and cyberbullying prevention programs. The Finnish education system is decentralized, with schools run by local governments. More and more governments around the world are taking matters into their own hands and implementing laws to best protect the youngest.
Partnership is very important, but how can you convince the parents? We often blame all the evil of this world on technology: “My child has problems because she spends too much time with the smartphone.” When talking about the use of AI for protecting children, you are trying to convince parents that, paradoxically, it is worth pushing children even deeper into the world of modern technology.
M: We always highlight the importance of the child-parent relationship. However, the problem is so extensive and complex that its solution requires everyone to play a part: parents, schools, government institutions with wise prevention programs, and technology, so that whenever something alarming happens online, we know about it in real time.
We step in when things go wrong online. Some people argue that social media fill the void emerging from inadequate social relationships. The significance of social media is a fact that we have to accept. The challenge is how to respond to this state of affairs.
Is AI never wrong? The word “stupid” isn’t always hate speech.
M: Modern, second-wave AI systems learn from annotated data: hundreds of thousands or even millions of examples of a given behavior. The problem is that the amount of data on pedophiles or suicidal behaviors is not large enough. At the moment, AI based purely on statistical learning cannot be trained adequately and keeps raising false alarms. At Samurai Labs, we combine statistical learning with knowledge-based reasoning (an integration of statistical and symbolic methods). Given the scarcity of data, we rely on the experience of suicidologists, psychologists, physicians, and parents, who inform us about certain mechanisms. Using this knowledge and knowing the context of a discussion, we are able to detect cyberbullying and suicidal intentions very precisely.
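To make the general idea more concrete, here is a minimal, purely illustrative sketch of a neuro-symbolic decision rule. The scoring function, the rules and the thresholds below are hypothetical assumptions invented for illustration; they are not Samurai Labs' actual models, rules or data.

```python
# Illustrative sketch only: a toy neuro-symbolic detector, not Samurai Labs' system.
import re
from dataclasses import dataclass


@dataclass
class Message:
    text: str
    quotes_someone_else: bool = False  # context flag assumed to come from a conversation parser


def statistical_risk_score(message: Message) -> float:
    """Stand-in for a trained classifier's probability of suicidal intent.

    A real system would use a statistical model; here a keyword heuristic fakes the score.
    """
    return 0.9 if re.search(r"\b(kill myself|end it all)\b", message.text.lower()) else 0.1


def symbolic_rules_allow_alert(message: Message) -> bool:
    """Expert-written symbolic rules that use context to veto false alarms."""
    # Rule 1: reporting or quoting someone else's words is not first-person intent.
    if message.quotes_someone_else:
        return False
    # Rule 2: common figures of speech ("this homework makes me want to die") are excluded.
    if re.search(r"\b(homework|traffic|monday)\b", message.text.lower()):
        return False
    return True


def detect_suicidal_intent(message: Message, threshold: float = 0.7) -> bool:
    """Alert only when the statistical score and the symbolic layer agree."""
    return statistical_risk_score(message) >= threshold and symbolic_rules_allow_alert(message)


print(detect_suicidal_intent(Message("I want to end it all tonight")))                          # True
print(detect_suicidal_intent(Message("He said he wants to end it all", quotes_someone_else=True)))  # False
```

In a real system, the keyword heuristic would be replaced by a trained statistical model, and the symbolic layer would encode expert knowledge about conversational context rather than a handful of hard-coded keywords.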
Let’s say you have detected a potential risk. What do you do next?
P: In the case of cyberbullying and verbal aggression, our priority is education. Through interventions that appeal to norms and empathy, we try to positively influence interactions in a given online space, reminding users about respect and non-violent communication.
In the case of suicidal intentions, together with suicidologist Dr. Halszka Witkowska and the team of Życie Warte Jest Rozmowy, the education and support platform she created, we are developing intervention procedures to provide people in crisis with the most relevant supportive materials and with direct contact with specialists who can help them out of the acute phase of the crisis.
While what you do is very noble, I can’t help but wonder whether it is not “art for art’s sake”? Until you develop proper response procedures, not much can come out of it.
P: Our models detect approximately 80–120 people every day who share their suicidal thoughts and plans on one of the best-known English-language discussion forums. Faced daily with such an intensity of human tragedy, we all feel an urgent need to help.
Regarding verbal aggression, we have already conducted studies that have shown us certain ways to respond effectively. Helping people in crisis online is still an understudied area. Scientific publications and research programs on how to provide effective and scalable support online are scarce. Given the sensitivity of the issue, we must proceed carefully, anticipating the consequences of various interventions and of the wording used in the process.
M: This is where AI helps us not only reach people who are most in need but also protect those who provide help and moderate content on the web.
What do you mean?
M: Work related to content moderation is extremely demanding mentally. Moderators are at risk of post-traumatic stress disorder, depression, and anxiety. Spending entire days immersed in what frequently proves to be drastic content is not a job for humans; AI must support us in this area. We can see in our own team how important it is to offer care and support to those of us who analyze streams of disturbing content on a daily basis.
What was the most distressing thing you have read?
M: Stories shared by people are the hardest. The terrible suffering and violence they have experienced.
P: I have recently come across a heartbreaking story of a boy who lost someone he loved to suicide. It happened when they ceased contact at the boy’s request because of the frequent and fierce conflicts in their relationship. The trauma and suicidal thoughts he experienced after his friend’s death were unbearable.
Confessions of those fantasizing about becoming a serial killer or a mass murderer can be just as intense and disturbing. Such stories can also be found online, and social networking sites, as Michał mentioned, frequently don’t react.
Can we ever get to the point where the internet will be safe? Or will we always just be putting out fires?
M: We are certainly capable of achieving a greater balance provided that we never stop fighting for it and educating. Violence and radicalization must be prevented. However, it is like with antivirus programs – new threats will continue emerging for which the previous solutions will no longer be effective. The internet with moderation conducted ex post facto, after the harm has been done, particularly in terms of threats to children and public figures, must become a thing of the past.
---
Do you need help? Are you in a crisis? For those in the US, the National Suicide Prevention Lifeline has people available to help 24/7 by dialing 9-8-8. You can also contact a professional via text message using crisistextline.org. To find helplines in other countries, go to https://findahelpline.com/.
The Polish version of this interview is available HERE.