
What does generative AI mean for CSE?

June 27, 2023

It seems that we can’t turn on any device, go to any platform or view any news feed without seeing a mention of AI, specifically generative AI. Whether it’s professionals worried about the implications that recently released apps like ChatGPT might have on the security of their jobs, or evangelists espousing the great benefits that companies will reap from this time-saving productivity machine, the debates continue fiercely.

But one area with far greater potential to be impacted, and with much more devastating consequences, is the sexual abuse and exploitation of children. For those working in the child sexual exploitation (CSE) response community, the discussion swings between fear of potential negative outcomes and hope that we could be looking at a powerful tool for those tracking down perpetrators.

There is no question that this technology, which is now easily and often freely available, could be used to create child sexual abuse material (CSAM). The concept of computer-generated CSAM is certainly not new. Children have already been subjected to image-based abuse using deepfake technology, as we saw in our May Monthly Brown Bag presentation. But generative AI technology, now available to anyone, is taking this abuse to a new level. And it has many in the sector worried, especially given the speed with which the technology has been made available, with seemingly no consideration given to what guardrails we might need to put in place.

But we can learn from the past and ensure that we install proper protections for children before it’s too late. The situation with AI has been compared to the introduction and rapid expansion of social media platforms, which were left to their own devices to write the rule book.

Whilst it appears that AI companies like OpenAI have already put measures in place to prevent the creation of CSAM using their technology, open-source platforms have been much slower to consider such protections, if they’ve included them at all. And it’s these platforms that are enabling the production of AI-generated CSAM. These open-source platforms, many based on a model created by Meta called LLaMA, have strong support from those who claim they will be the vehicle for accelerated innovation. But they also facilitate the easy production of abuse material, using images of real children as source material.

The harms and impact of AI-generated CSAM are significant. The image-based abuse of a child whose picture has been used to create CSAM, the revictimisation of children through the production of additional and progressively more violent computer-generated versions of their abuse, and the potential for the material to trigger an interest in CSAM that escalates to contact offending are all potential issues. The images might be ‘fake’, but the dangers and abuse are real.

Researchers from Thorn and the Stanford Internet Observatory released a research paper this week examining the potential implications of photorealistic CSAM produced by generative AI. They found that less than 1 per cent of CSAM found in known predatory communities consisted of photorealistic AI-generated images. But in a New York Times article, David Thiel, Chief Technologist at the Stanford Internet Observatory and co-author of the report, said that “within a year, we’re going to be reaching very much a problem state in this area.”

Regulators around the world have acknowledged the potential for harm created by this technology and recognise the need for strong, cohesive regulation worldwide. Several regions, including Europe, the US, Canada and Australia, which is currently conducting community consultation into AI regulation, are at various stages of introducing legislation to regulate AI.

And just as in the physical world, where our law enforcers both administer and are bound by the law, AI regulation will need to incorporate appropriate protections for citizens as investigators harness the powerful potential of AI tools to help them detect, report and prosecute AI-generated CSAM.

If you’re interested in hearing more about AI and its implications for child sexual abuse, Colm Gannon will expand on these concepts at our June Monthly Brown Bag event. Make sure you register today.

