I’m sure most of us have heard about AI (Artificial Intelligence) in recent years and the positives and negatives it can bring to the classroom. In recent weeks I have been in many schools and meetings where people are asking, ‘Has anyone had any cases of children using AI or being victimised by AI?’, and unfortunately the answer has been yes.
The NSPCC has released research alongside AWO (law and technology consultants) analysing Gen AI (Generative AI) and its impact on young people. Gen AI is a form of AI that generates new content such as images, text and videos; this content can then be used to bully, harass, groom, extort or mislead children.
Although Gen AI in schools can bring many positive opportunities, including adaptable programmes for children with different learning styles and new ways of interacting with or accessing teaching material online, it is fair to say that it is also opening up a world of new and fast-moving risk for children, exposing them to complexities we may not be ready to manage.
As early as 2019, Childline shared that it had started to see an increase in children calling the helpline because they had been victims of Gen AI. In 2023/24 alone, Childline provided over 900 counselling sessions to children relating to blackmail or threats to share sexual images on the internet.
Some of the AI platforms children are using are listed below:
- ChatGPT
An online chatbot using natural language processing to engage in human-like conversation or generate text-based information such as articles, social media posts, essays, emails and so on.
- Character.AI
A chatbot service where you can create characters, including their personalities. The platform allows users to have human-like conversations with these characters, and you can choose to make your character public so that anyone else using the platform can interact with it.
- Stable Diffusion
This is a platform that generates images from a text prompt; for example, if you were to type in a character’s physical and personality traits, it would generate an image based on that information. Not only does this platform create hyper-realistic images, it can also produce videos and animations that are made to look real.
What are the risks of Gen AI in schools?
The research identifies seven safety risks to children:
- Sexual grooming
- Sexual harassment
- Bullying
- Sexual extortion
- Child sexual abuse imagery
- Harmful content
- Harmful advertisement and/or recommendations
Children are particularly vulnerable to becoming victims of AI-generated images or videos, whether through a manipulated picture or face-swapping software. The images and videos can look incredibly realistic, and the threat of them being shared is enough to cause significant distress to a child.
In 2023, the Internet Watch Foundation (IWF) published findings showing that over 20,000 AI-generated sexual abuse images were on the dark web, a figure it says has increased exponentially since. Of the images reviewed, over 90% were shown to be so realistic that they could be assessed under the same laws as real child sexual abuse imagery (IWF, 2024).
“Ofcom tracks children’s online and media usage, and have found that 59% of 7–17-year-old and 79% of 13–17-year-old internet users in the UK have used a generative AI tool in the last year. Snapchat’s My AI was the most used platform (51%), and there was no difference by gender in the number of children using these tools.”
Children’s Commissioner, 2024
It is not yet clear how monumental the impact on young people may be, but charities and organisations are calling for greater protection in law and for the technology giants to take accountability. However, we are already beginning to see, or will soon start to feel, the impact of Gen AI in schools, so how do we start safeguarding our pupils?
Steps to safeguard pupils from Gen AI
- Robust online filtering and monitoring systems that are regularly updated to ensure they are not letting through new AI technology. In addition to school monitoring systems, children also need to be supervised during lessons to make sure they are observing age restrictions; most AI websites are 18+.
- Remind staff about their own online presence, which should be covered in your school’s code of conduct. Staff are not immune to the risks of Gen AI; children can take staff images found online and manipulate them to circulate across social media.
- Creating awareness across the school community (pupils, parents and staff) and allowing open and honest conversations about AI, both the benefits and the risks it poses.
- Updating your school policies to include preventative and responsive measures specific to AI, including AI-generated sexual abuse imagery online and image-based threats or extortion.
- Reviewing your school’s consent forms, making sure they explicitly state how and where images of students will be used and for how long, and having an easily accessible process for parents, students and former students to request that images are removed or deleted.
- In the event of an incident, accurate recording will be vital. This includes usernames, times, dates, emails, messages and in-depth descriptions of any images or videos, based on the information available to you.
- Training and awareness for all staff on how to respond to possible Gen AI incidents. The Designated Safeguarding Lead will need to be informed immediately if it is known or suspected that a child has produced, or been a victim of, AI-generated content, as they will be best placed to decide the next steps, including whether the police need to be called, whether the images should be reported to the Internet Watch Foundation and/or whether a referral to children’s social care is needed.
Whilst we cannot be sure how quickly the use of AI will embed itself into young people’s lives, we cannot deny that it is here to stay. Schools are facing a changing landscape, not only in safeguarding children and educating them on safe online use, but now also in managing the risk that AI-generated images and videos can pose.
It is important to remember that this is a new concept that we are all still learning how to navigate. There is a lot of ongoing research to help us understand AI and how to protect children, and none of us can be expected to be experts yet.
Get in touch with our Safeguarding Team for further advice and information.
Please also remember there is support available:
https://www.gov.uk/government/organisations/uk-council-for-internet-safety
https://www.ceop.police.uk/Safety-Centre