
Grok, guardrails, and the cost of 'move fast' AI

January 16, 2026

What we are witnessing with Grok is not a technical failure. It is a platform governance failure. 


In recent weeks, Grok (an AI chatbot embedded within the platform ‘X’) has been used to generate non-consensual sexualised imagery, including ‘nudified’ images of women and, in some reported cases, children. The response so far has largely been reactive: tweaks to safeguards, geoblocking in certain jurisdictions, and statements that shift responsibility back onto users once the damage is already done.
 
From an Australian perspective, our eSafety Commissioner made it clear this week that this is unacceptable. The Commissioner confirmed a rise in reports relating to Grok producing sexualised and exploitative imagery and has formally engaged X to explain what safeguards are in place – and why they were clearly insufficient. With new mandatory online safety codes commencing in March 2026, the message could not be clearer: generative AI systems will be expected to anticipate misuse, not merely respond after harm occurs.
 
Australia is not alone in this conclusion. Regulators across the EU and UK have also moved quickly, launching investigations, issuing data-retention orders, and, in some cases, referring matters to prosecutors under the Digital Services Act and national criminal law. Indonesia and Malaysia imposed temporary bans. Countries including France, Germany, Italy, Sweden, India, and the United Kingdom are all examining whether Grok’s design and deployment breach existing safety obligations. 
 
What is striking is how familiar this pattern is. We have seen it before with social media platforms: rapid deployment, minimal guardrails, externalised harm, followed by accusations of ‘censorship’ when regulators step in. But generative AI raises the stakes significantly. When a system can automatically manipulate real images of real people at scale – particularly women and children – the harm is not hypothetical. It is immediate, personal, and enduring. 

At ICMEC Australia, we see the real-world consequences of safeguarding and governance failures every day. Our work with law enforcement, regulators, and frontline organisations shows us time and time again that harms created by unsafe technology design do not remain online – they follow children into their schools, homes, and communities, for life. Innovation does not have to come at this cost. Safety and technological progress are not mutually exclusive, but safety must be built in from the start.

AI companies cannot credibly claim neutrality when their products are designed in ways that invite abuse, fail to integrate guardrails, and are deployed under a narrow interpretation of ‘innovation at speed’. Safety by design is not optional. Consent protections are not a ‘nice to have’. And blaming users is no longer a defensible position when misuse was foreseeable from day one.

The Australian Government has signalled it understands what is at stake. Renewed attention towards legislating a digital duty of care, alongside emerging work on AI-related harms, reflects a broader shift towards placing responsibility where it belongs – on companies with the power to prevent harm before it occurs. As Australia moves into 2026, these reforms – and the need for further action – must remain a priority, particularly where children are concerned.
 
Grok is not under scrutiny because it is controversial. It is under scrutiny because it crossed a line. The question now is whether the AI sector learns from this moment – or whether regulators will be forced to draw much firmer boundaries on its behalf. 

About the author

Colm Gannon is the CEO of the International Centre for Missing and Exploited Children (ICMEC) Australia, with over 20 years’ experience in law enforcement, digital safety, and child protection. He has led national and international investigations into online harms, child sexual exploitation and abuse (CSEA), and cybercrime. Combining this expertise with a technical background in AI and software development, Colm works at the intersection of technology and child safety, advancing ICMEC Australia’s mission to protect children.

References 

  1. eSafety Commissioner (9 January 2026). ‘eSafety raises concerns about misuse of Grok to generate sexualised content’. https://www.esafety.gov.au/newsroom/media-releases/esafety-raises-concerns-about-misuse-of-grok-to-generate-sexualised-content
  2. Euronews (13 January 2026). ‘From bans to probes: Which countries are taking aim at Elon Musk’s Grok AI chatbot?’. https://www.euronews.com/next/2026/01/13/from-bans-to-probes-which-countries-are-taking-aim-at-elon-musks-grok-ai-chatbot
  3. Ars Technica (14 January 2026). ‘Grok updated to stop undressing women and children amid investigations’. https://arstechnica.com/tech-policy/2026/01/musk-still-defending-groks-partial-nudes-as-california-ag-opens-probe/
