AI Use by Predators: What Schools Should Know and Do
The threat AI poses to students goes well beyond cheating, says Yasmin London, global online safety expert at Qoria and a former member of the New South Wales Police Force in Australia.
Increasingly at U.S. schools and beyond, AI is being used by predators to manipulate children. Students are also using AI to generate inappropriate images of classmates and staff members. For a recent report, Qoria, a company that specializes in child digital safety and wellbeing products, surveyed 600 schools across North America, the UK, Australia, and New Zealand.
Some key findings include:
- 91.4% of U.S. respondents are concerned about online predators using AI to groom students.
- Children as young as 11 to 13 years old are possessing, requesting, and sharing nude content online: 67.6% of U.S. respondents report seeing this behavior, with Snapchat as the top platform of choice.
- Parents are not educated or engaged: 70.6% of U.S. respondents said there is a lack of awareness among parents when it comes to AI and explicit content.
The report demonstrates that schools are worried about this issue but don’t yet have the resources to respond.
“There was a lot of concern, but not a lot of knowledge,” London says.
Here’s a closer look at the threats around AI grooming and other AI concerns, and what schools can do about them.
AI Grooming and Other Sex Crimes: Potential Dangers to Avoid
Just as scammers use AI to make their phishing schemes more efficient and effective, online sexual predators can use AI to harm children, London says. She adds that schools should be aware of how predators might use AI to target a victim, gain the victim’s trust, fill a need, and then manipulate and isolate them.
According to the Qoria report, AI can help predators target students by analyzing data, recognizing patterns in their behavior, and creating convincing fake personas. AI can also generate false information about people close to the child to sow distrust and isolate the child, making them more vulnerable. Additionally, deepfakes can be used to blackmail children with the threat of releasing potentially embarrassing material.
In the same vein, students can use deepfake technology to harass other students and staff by creating fake explicit images of them.
These are just some of the ways AI can be used in a harmful and abusive manner. Another example is AI de-aging tools.
“So I could put in a picture of me. I’m 42 years old, and it will change my appearance to be a 10-year-old,” London says. “The use of AI to manipulate children’s images in a lot of different contexts is definitely a concern.”
Preventing Nefarious AI Use
London shares several steps schools can take to limit the threat to students from this type of AI use.
Using technology correctly. London says individual schools and districts should review their filters and monitoring systems to make sure they’re appropriate for modern contexts. “Some of them can be quite basic. Some can result in over-blocking, and that can be problematic,” she says. She adds that schools want a filter that can pick up on contextual alerts. “Language that a young person might be using that might not necessarily [be] flag[ged] as being explicit, but once you have an understanding about how a predator might talk to a child that might be something that is picked up.”
Educating parents. Schools understand that students need to be educated on these topics, but sometimes they forget about parents. “A really key finding from this report was that 70% of schools said that a lack of parental awareness around AI and explicit content and predators was a key barrier,” London says. “And what we also found is that only 44% of the schools or districts proactively engaged parents in educational nights or shared information. So that’s a really easy win.” She adds this education does not need to be onerous. “It could be sharing some regular communications around some risks and working with experts in the area such as local police.”
Sharing parental controls with parents. “A lot of schools and districts can share things like parental control tools as well. That will help parents not just manage their children’s time, but also the sort of content that they’re accessing,” London says. “When they use a parental control tool, they’re able to see where their child has been blocked from. That information is really great to start a conversation around.”
Educating staff. Staff education was also identified in the Qoria report as a key need around this issue. London says there are low-budget, high-impact steps schools can take, such as having a local university cybersecurity expert speak about the dangers of AI or forming working groups around these topics. She adds, “When staff has good knowledge and they have a strength-based approach to tools like AI, that has a knock-on effect when it comes to things like help-seeking.”