International Centre for Missing and Exploited Children Australia Ltd

Blog

What does Generative AI mean for CSE?

June 27, 2023

It seems that we can’t turn on any device, go to any platform or view any news feed without seeing a mention of AI, specifically generative AI. Whether it’s professionals worried about the implications that recently released apps like ChatGPT might have on the security of their jobs, or evangelists espousing the great benefits that companies will reap from this time-saving productivity machine, the debates continue fiercely.

But one area that has far more potential to be impacted, and with much greater and more devastating consequences, is the sexual abuse and exploitation of children. For those working in the child sexual exploitation (CSE) response community, the discussion also swings between fear of the potential harms and hope that we could be looking at a powerful tool for those tracking down perpetrators.

There is no question that this technology, which is now easily and often freely available, could be used to create child sexual abuse material (CSAM). The concept of computer-generated CSAM is certainly not new. Children have already been subjected to image-based abuse using deepfake technology, as we saw in our May Monthly Brown Bag presentation. But generative AI technology, now available to anyone, is taking this abuse to a new level. And it has many in the sector worried, especially given the speed with which the technology has been made available, with seemingly no consideration of what guardrails we might need to put in place.

But we can learn from the past and ensure that we install proper protections for children before it’s too late. The situation with AI has been compared to the introduction and rapid expansion of social media platforms, which were left to their own devices to write the rule book.

Whilst it appears that AI companies like OpenAI have already put measures in place to prevent the creation of CSAM using their technology, open-source platforms have been much slower to consider such protections, if they’ve included them at all. And it’s these platforms that are enabling the production of AI-generated CSAM. These open-source platforms, many based on a model created by Meta called LLaMA, have strong support from those who claim that they will be the vehicle for accelerated innovation. But they also facilitate the easy production of abuse material, using images of real children as source material.

The harms and impact of AI-generated CSAM are significant: the image-based abuse of a child whose picture has been used to create CSAM, the revictimisation of children through the production of additional and progressively more violent computer-generated versions of their abuse, and the potential for the material to trigger an interest in CSAM that escalates to contact offending. The images might be ‘fake’, but the dangers and the abuse are real.

Researchers from Thorn and the Stanford Internet Observatory released a research paper this week examining the potential implications of photorealistic CSAM produced by generative AI. They found that less than 1 percent of the CSAM found in known predatory communities consisted of photorealistic AI-generated images. But in a New York Times article, David Thiel, Chief Technologist at the Stanford Internet Observatory and co-author of the report, said that “within a year, we’re going to be reaching very much a problem state in this area.”

Regulators around the world have acknowledged the potential for harm created by this technology and recognise that we need strong and cohesive regulation worldwide. Several regions, including Europe, the US, Canada and Australia, which is currently conducting community consultation on AI regulation, are at various stages of introducing legislation to regulate AI.

And just as in the physical world, where law enforcement both administers the law and is bound by it, AI regulation will incorporate appropriate protections for citizens as investigators harness the powerful potential of AI tools to help them detect, report and prosecute AI-generated CSAM.

If you’re interested in hearing more about AI and its implications for child sexual abuse, Colm Gannon will expand on these concepts at our June Monthly Brown Bag event. Make sure you register today.
