In brief
Australia’s eSafety Commissioner flagged a spike in complaints about Elon Musk’s Grok chatbot creating non-consensual sexual images, with reports doubling since late 2025.
Some complaints involve potential child sexual exploitation material, while others relate to adults subjected to image-based abuse.
The concerns come as governments worldwide scrutinize Grok’s lax content moderation, with the EU declaring the chatbot’s “Spicy Mode” unlawful.
Australia’s independent online safety regulator issued a warning Thursday about the growing use of Grok to generate sexualized images without consent, revealing that her office has seen complaints about the AI chatbot double in recent months.
The country’s eSafety Commissioner, Julie Inman Grant, said some reports involve potential child sexual exploitation material, while others relate to adults subjected to image-based abuse.
“I am deeply concerned about the growing use of generative AI to sexualise or exploit people, particularly where children are involved,” Inman Grant posted on LinkedIn on Thursday.
The comments come amid mounting international backlash against Grok, a chatbot built by billionaire Elon Musk’s AI startup xAI, which can be prompted directly on X to alter users’ photos.
Inman Grant warned that AI’s ability to generate “hyper-realistic content” is making it easier for bad actors to create synthetic abuse material and harder for regulators, law enforcement, and child-safety groups to respond.
Unlike rivals such as ChatGPT, Musk’s xAI has positioned Grok as an “edgy” alternative that generates content other AI models refuse to produce. Last August, it launched “Spicy Mode” specifically to create explicit content.
Inman Grant warned that Australia’s enforceable industry codes require online services to implement safeguards against child sexual exploitation material, whether AI-generated or not.
Last year, eSafety took enforcement action against widely used “nudify” services, forcing their withdrawal from Australia, she added.
“We have entered an age where companies must ensure generative AI products have appropriate safeguards and guardrails built in across every stage of the product lifecycle,” Inman Grant said, noting that eSafety will “investigate and take appropriate action” using its full range of regulatory tools.
Deepfakes on the rise
In September, Inman Grant secured Australia’s first deepfake penalty when the Federal Court fined Gold Coast man Anthony Rotondo $212,000 (A$343,500) for posting deepfake pornography of prominent Australian women.
The eSafety Commissioner took Rotondo to court in 2023 after he defied removal notices, saying they “meant nothing to him” as he was not an Australian resident, and then emailed the images to 50 addresses, including Inman Grant’s office and media outlets, according to an ABC News report.
Australian lawmakers are pushing for stronger protections against non-consensual deepfakes beyond existing laws.
Independent Senator David Pocock introduced the Online Safety and Other Legislation Amendment (My Face, My Rights) Bill 2025 in November, which would allow individuals sharing non-consensual deepfakes to be fined $102,000 (A$165,000) upfront, with companies facing penalties of up to $510,000 (A$825,000) for non-compliance with removal notices.
“We are now living in a world where increasingly anyone can create a deepfake and use it however they want,” Pocock said in a statement, criticizing the government for being “asleep at the wheel” on AI protections.