
ICMEC on ABC Radio

ICMEC Australia CEO, Colm Gannon, recently joined Ali Moore on ABC Melbourne Radio to discuss the growing dangers of artificial intelligence (AI). As AI technology continues to advance at an unprecedented pace, concerns around its ethical use, potential for exploitation, and impact on child protection are becoming increasingly urgent.
During the interview, Gannon highlighted the risks associated with AI-generated content, deepfakes, and the ease with which bad actors can use these tools to exploit children online. With AI now capable of creating highly realistic images, videos, and voice clones, the potential for misuse is alarming. ICMEC (International Centre for Missing & Exploited Children) is at the forefront of advocating for stronger policies and regulations to safeguard vulnerable children from these emerging threats.
One of the key concerns raised in the discussion was the challenge of detecting AI-generated child abuse material. Traditional content moderation tools struggle to keep up with AI’s ability to rapidly produce new forms of harmful content. Gannon called for urgent collaboration between governments, tech companies, and child protection agencies to address this growing crisis.
Ali Moore also questioned the role of social media platforms and their responsibilities in preventing AI-driven exploitation. Gannon emphasized that while some platforms are taking steps to regulate AI use, enforcement remains inconsistent. He stressed the need for global cooperation and proactive AI legislation to ensure child safety remains a top priority.
ICMEC Australia continues to push for policy changes and technological solutions that can help combat the dangers posed by AI. Gannon’s appearance on ABC Melbourne Radio served as a crucial reminder that while AI presents incredible opportunities, it also demands urgent safeguards to protect the most vulnerable members of society.
https://www.abc.net.au/listen/programs/melbourne-drive/drive/105050584
https://icmec.org.au/icmec-australia-in-the-news/: Listen to Colm Gannon talk with Ali Moore on 774 ABC Radio

‘Nudify’ economy

AI-powered “nudify” apps are fueling a disturbing rise in non-consensual sexual deepfakes, exploiting victims without their knowledge or consent. These apps use artificial intelligence to manipulate images, generating fake explicit content that can be used for harassment, blackmail, or online abuse. Australia is witnessing a growing concern over the ease with which these tools can be accessed, often through cryptocurrency transactions that make tracking offenders more difficult.
This article from Crikey delves into how these AI-driven exploitation tools work, the legal and ethical implications, and the challenges law enforcement faces in curbing their spread. Despite increasing awareness, the rapid advancement of generative AI makes it difficult to regulate these technologies effectively. Lawmakers and advocacy groups are calling for stronger legal protections to criminalize the creation and distribution of non-consensual deepfakes and hold perpetrators accountable for their actions.
Victims often struggle with the emotional and reputational damage caused by these manipulated images, with limited legal recourse available. Many experience severe distress, as the circulation of fake explicit images can harm careers, relationships, and mental well-being. Social media platforms and online communities are under scrutiny for failing to detect and prevent the sharing of such content, raising questions about their responsibility in protecting users from online abuse.
Read the full analysis on Crikey to understand the scope of this alarming trend, the people affected, and the possible legal and technological solutions to combat AI-driven sexual exploitation.
ICMEC on ID Tech Podcast

In a recent episode of the ID Talk Podcast, Colm Gannon, CEO of the International Centre for Missing & Exploited Children (ICMEC) Australia, delved into the complex interplay between technology, privacy, and child protection. The discussion underscored the imperative of harmonizing regulatory frameworks with technological advancements to bolster the fight against child exploitation.
ICMEC's Collaborative Approach
Gannon highlighted ICMEC's multifaceted collaboration with law enforcement agencies, regulatory bodies, and technology companies. This triad partnership is pivotal in addressing the evolving challenges of child exploitation in the digital age. By fostering these alliances, ICMEC aims to create a cohesive strategy that leverages technological innovations while ensuring robust child protection mechanisms.
Balancing Privacy and Protection
A significant portion of the conversation centered on the delicate balance between safeguarding individual privacy rights and protecting victims of child exploitation. Gannon emphasized that while privacy is a fundamental right, it should not serve as a shield for criminal activities. He pointed out that certain privacy regulations, in their current form, might inadvertently protect perpetrators rather than victims, thereby necessitating a reevaluation to ensure that victim protection remains paramount.
Role of Emerging Technologies
The integration of technologies such as facial recognition and artificial intelligence (AI) in investigative processes was a focal point of the discussion. Gannon elaborated on how these tools can significantly enhance the efficiency and effectiveness of identifying and apprehending offenders. However, he also cautioned about the ethical considerations and potential biases inherent in AI systems, advocating for responsible adoption with appropriate oversight and governance.
Regulatory Developments and Challenges
The conversation also touched upon recent regulatory developments, including the European Union's AI Act. Gannon discussed how such regulations impact the deployment of AI and other technologies in law enforcement. He stressed the importance of crafting regulations that do not stifle innovation but instead promote the ethical use of technology in protecting vulnerable populations.
Conclusion
Gannon's insights shed light on the intricate dynamics between technological innovation, regulatory frameworks, and child protection efforts. The podcast episode serves as a call to action for stakeholders across sectors to collaborate, ensuring that advancements in technology are harnessed effectively and ethically to combat child exploitation.
For a more in-depth understanding, you can listen to the full episode of the ID Talk Podcast featuring Colm Gannon.
Anna Bowden's abuser had to get close to her when she was a child before the sexual exploitation began.
Today, people like her perpetrator can victimise Australian kids from anywhere. In fact, they can do it with the click of a button.
The advancement and accessibility of AI technology has triggered a "tidal wave" of sexually explicit 'deepfake' images and videos, and children are among the most vulnerable targets.
"Accessing and using AI software to create sexual deepfake images is alarmingly easy," Jake Moore, Global Cybersecurity Advisor at ESET, tells 9honey.
From 2022 to 2023, the Asia Pacific region experienced a 1530 per cent surge in deepfake cases, per Sumsub's annual Identity Fraud Report.
One platform, DeepFaceLab, is responsible for about 95 per cent of deepfake videos, and free platforms are available to anyone willing to sign up with an email address.
They can then use real photos of the victim (usually harmless snaps from social media accounts) to generate whatever AI image they want; in about 90 per cent of cases, those images are explicit, according to Australia's eSafety Commissioner.
"We've got cases of deepfakes and people's faces being used in images which are absolutely and utterly horrific," reveals Bowden, CEO at the International Centre for Missing & Exploited Children (ICMEC) Australia.
This technology wasn't around when she was abused and it horrifies her to know it's already being used to victimise Aussie kids, many of whom have no idea they're at risk.
"We don't talk about it," she says. "There's no information. There's no idea of what offenders are doing, what we need to look out for.
"We're helping criminals because we're not communicating."
Despite the spike in deepfake cases, 79 per cent of Aussie social media users confessed they struggle to identify AI-generated content online, per a 2024 McAfee survey.
Many parents don't understand the power of AI and the dangers it can pose, so they don't know how to protect or educate their kids.
The worst part is that sometimes children can be perpetrators too.
ESET has observed a surge in teen AI sextortion cases, where teens generate non-consensual, explicit images and videos of their peers to impress, bully, or intimidate others.
"Constant exposure to online content has desensitised many young individuals, reducing their understanding of the real-world consequences of their actions," Moore explains.
Teen perpetrators can be prosecuted under Australian law, but the victim may still experience shame, fear, humiliation, loss of self-esteem, financial loss, and damage to their social standing.
"Just because those images or videos are AI generated does not mean they're harmless," Bowden says.
This kind of image-based abuse can cause mental and emotional distress, and some victims die by suicide. Worse still, the deepfakes may never go away.
Advocate Noelle Martin was 18 when she discovered fake non-consensual images of herself online; years later, the photos and videos are still circulating.
"This can destroy someone's entire identity and reputation, their name, and image, and self-determination, and dignity. It can define that person forever," she told 9honey.
A deepfake image or video of a child can spread rapidly and can be almost impossible to have removed from the internet.
Most social media platforms ban non-consensual explicit content, but AI-generated pornography slips through due to the sheer volume of content being posted.
"This makes it difficult for moderators to remove such content quickly enough, while users are swift to share, save, and screenshot these images," Moore notes, revealing they're often shared through private channels.
Earlier this year, social media sites struggled to remove an explicit AI-generated image of Taylor Swift from their platforms.
The images were viewed more than 45 million times in 17 hours.
Now imagine if the target was your child, not the biggest celebrity in the world.
It could end up in the hands of child predators seeking out explicit content, who could then use the deepfake to generate even more vile non-consensual photos and videos bearing the face of an Aussie child they've never met.
Bowden describes it as "incredibly traumatic" and warns "it could be around for decades, if not longer".
Thankfully, organisations like ICMEC, ESET, and the Australian eSafety Commissioner are working to combat this.
"It's evolving so quickly that as we create solutions, the technology changes, and then we need a new solution again," Bowden says.
She works with Australian organisations, government and law enforcement to prevent deepfakes and other child sexual exploitation online, but says parents need education too.
That means learning about the risks of AI, explaining them to children in age appropriate ways, and teaching children basic online safety precautions.
It's an uncomfortable conversation and one many Aussies would rather not have, but silence won't help.
"It's just terrifying how low the level of awareness is and how high the risk is," Bowden adds. "We cannot keep just hoping this is going to go away."
Moore advises parents to maintain open communication so their children know they can speak up if something happens.
"If a child becomes a target of AI-generated porn, parents have to remain calm and reassure the child that they are not at fault," Moore adds.
"It is crucial to document all evidence and report the incident to the relevant authorities, such as the school and the police [and] contact the platform hosting the content to request its removal."
AI isn't going to go away, but deepfakes can be tackled and eradicated.
That will require an increase in awareness about the dangers and legal implications of sharing sexual content online, as well as tougher regulations to deter potential offenders, and more accountability from online services.
It will take work and the landscape of AI is changing rapidly, but Bowden is certain that "if the good guys all get together and want to make change, we can outnumber the bad guys".

ICMEC Australia acknowledges Traditional Owners throughout Australia and their continuing connection to lands, waters and communities. We pay our respects to Aboriginal and Torres Strait Islanders, and Elders past and present.