
Not if, but how: Australia's AI child safety moment

April 29, 2026

Earlier this month, Anthropic chief executive Dario Amodei stood before some of Australia's leading policymakers and said plainly: “The fundamental challenge remains – we know much less than we would like to, but the technology is moving faster than we’d like it. So we have to act, but we're not sure how to act.”

Eight days later, OpenAI released a policy blueprint on protecting children in the age of generative AI – a detailed framework co-developed with the National Center for Missing and Exploited Children (NCMEC) and US state attorneys general, calling for updated laws, better reporting standards and safety-by-design controls built into AI platforms from the ground up.  

Then this week, Microsoft CEO Satya Nadella arrived in Australia to announce the company's largest-ever investment in the country – A$25 billion by 2029 – and signed a memorandum of understanding with the Albanese Government.

In the space of a single month, three of the world’s most powerful AI players have explicitly told governments and industry to act.

OpenAI’s blueprint is a worthwhile read. Its core argument – that protecting children requires a layered, prevention-first approach, not a single technical fix – is right, and echoes longstanding calls from the child safety sector. So does its insistence that better reporting isn’t just about volume, but about quality: structured, actionable information that allows investigators to triage cases faster and identify children who are at immediate risk of harm. That last point matters more than most people realise. 

Much of Australia's public conversation about AI and child safety has centred on AI-generated child sexual abuse material (CSAM) – synthetic imagery that doesn't depict a real child. There is sometimes an implicit assumption that this is therefore a lesser harm; a content problem, largely detached from real-world abuse. The evidence is unambiguous that this is wrong.

AI-generated abuse material is being produced using existing images of real survivors, embedding their trauma into synthetic content and re-victimising them without their knowledge. Offenders are now using AI to create deepfakes of specific children from as few as 20 images. And increasingly, a disturbing legal tactic has emerged – what researchers call the ‘liar’s dividend’ – where offenders claim genuine evidence of contact abuse was AI-generated, exploiting public awareness of synthetic media to create plausible deniability.

The harm is not abstract. Every piece of AI-generated material represents a real victim who deserves identification and justice – but the volume is now outpacing the capacity of those tasked with responding. Specialist investigators are being overwhelmed. The question is no longer whether this is a crisis; it is whether our response is equal to it. 

Australia is not ignorant of these complexities. ICMEC Australia hosted two National Roundtables on Child Safety in the Age of AI at Parliament House in July and September last year, driving a shift from concern to action. Independent Member for Curtin, Kate Chaney MP, in collaboration with ICMEC Australia, introduced a private member’s bill to criminalise AI tools built specifically to generate CSAM. The eSafety Commissioner has issued legal notices to AI companion chatbot providers. The Minister for Communications has announced an intention to ban ‘nudify’ apps. Across the research, advocacy and industry sectors, coalitions are forming – among them the SaferAI for Children Coalition, which unites more than 25 organisations around the shared goal of ensuring that children are kept safe in a rapidly developing digital space.

The commitment in this space is genuine and the expertise is deep – but meaningful steps do not amount to a coordinated strategy. Too many efforts run in parallel rather than compounding. Legislative proposals sit without action. Memoranda of understanding with individual AI companies, while welcome and significant, are not legally binding and reach only one player at a time in a rapidly expanding market. The OpenAI blueprint illustrates the gap: valuable, but one company, one document, one jurisdiction. A piece of the puzzle, not the whole picture. 

Australia's conversation about AI and child safety has been almost entirely one-sided – focused on stopping AI being used to harm children. That focus is necessary, but it misses a key part of the conversation: AI is also the most powerful tool we have to find and help children who are being harmed right now. Machine learning can flag harmful content faster than any human investigator. AI-assisted detection tools can triage which cases involve real victims in active danger, directing scarce investigative resources where they are most urgently needed.

Australia has the research capability, the cross-sector buy-in and the policy momentum to lead on this – not just to regulate AI as a threat, but to deploy it as a protector. What is needed now is a systematic national approach that connects the dots: legislation that keeps pace with the technology, platform obligations with real enforceability, and active investment in AI as a tool for early intervention and victim identification.  

The window to act has not yet closed, but it will not stay open indefinitely. 

About the author 

Mikaela (a/g Manager, Government Affairs and Public Policy) works across government relations, public policy, and partnerships at ICMEC Australia, engaging with parliamentarians and agencies to turn emerging risks into practical policy outcomes. She leads the SaferAI for Children Coalition and contributes to cross-sector discussions on AI-enabled harm, governance, and child protection. 


ICMEC Australia acknowledges Traditional Owners throughout Australia and their continuing connection to lands, waters and communities. We pay our respects to Aboriginal and Torres Strait Islanders, and Elders past and present.
