Australia Unveils Sweeping Ban on ‘Nudify’ and Stalking Apps to Combat AI-Driven Abuse

Canberra, Australia – In a significant move to protect its citizens from emerging online threats, the Australian government has announced a crackdown on applications used to create non-consensual deepfake nude imagery and on tools that enable covert online stalking. The initiative places direct responsibility on technology platforms to prevent access to these “abhorrent technologies,” marking a notable step in the nation’s ongoing efforts to strengthen online safety.

The Growing Threat of AI-Generated Harm

The proliferation of advanced artificial intelligence has introduced sophisticated tools that are increasingly being weaponized for malicious purposes. “Nudify” applications, which use AI to digitally remove clothing from images, and apps designed for covert online stalking are at the forefront of this concern. These technologies are not merely theoretical problems; they are causing “real and irreparable damage now,” according to Minister for Communications Anika Wells. Their rise has coincided with an alarming increase in image-based abuse, particularly affecting young people. Research by Thorn, a US-based child-safety advocacy group, found that 10% of surveyed young people knew someone who had had deepfake nude imagery created of them, and 6% reported being direct victims themselves. The trend underscores the urgent need for legislative action.

Canberra’s Crackdown: Holding Tech Platforms Accountable

Minister Wells said the government will work with the tech industry to identify and restrict access to these harmful applications. Under the proposed reforms, tech companies will be required to take “reasonable steps” to block such tools, with fines of up to A$49.5 million (approximately US$32 million) for non-compliance. The approach mirrors the government’s recently enacted social media ban for people under 16, which likewise places the onus on platforms to enforce age restrictions. The government has said it will use “every lever” at its disposal to curb the accessibility of these abusive technologies.

Balancing Innovation with Safety: Minister’s Vision

Minister Wells emphasized that the government’s objective is to target technologies used “solely to abuse, humiliate and harm people, especially our children.” She clarified that the reforms are designed to safeguard legitimate, consent-based AI and online tracking services, so that innovation is not stifled. “There is a place for AI and legitimate tracking technology in Australia, but there is no place for apps and technologies that are used solely to abuse, humiliate and harm people,” Wells said. This balanced approach seeks to foster technological advancement while prioritizing the safety and well-being of Australians.

eSafety Commissioner’s Warnings and Data

Australia’s eSafety Commissioner, Julie Inman Grant, has been a vocal advocate for stronger safeguards against AI-driven abuse. In recent months, her office has reported a significant surge in incidents involving digitally altered intimate images, including deepfakes: reports involving people under 18 more than doubled in the 18 months to June, with a concerning proportion targeting women and girls. Inman Grant has previously warned about the rise of “declothing apps” and the creation of synthetic child sexual abuse material. The government’s current actions are informed by her recommendations, which call for greater responsibility from AI tool creators, including embedding “safety by design” principles from the outset.

Strengthening the Legal Arsenal

The new measures are intended to complement existing state and federal laws that prohibit stalking and the non-consensual distribution of sexually explicit material. While those laws provide a foundation, the rapid evolution of technology has exposed gaps. Independent MP Kate Chaney had previously proposed legislation to criminalize the possession and use of ‘nudify’ apps, underscoring the parliamentary appetite to address these emerging threats. The government’s move is seen as a crucial step in updating the legal framework to combat technologically sophisticated forms of abuse.

Industry Collaboration and Responsibilities

The government’s strategy includes close collaboration with the technology industry, a partnership it considers vital to developing effective methods for restricting access to these harmful tools. Some tech giants have already taken proactive steps: Meta, for example, has pursued legal action against companies advertising “nudify” apps on its platforms. The company has said it welcomes legislation that helps combat intimate image abuse, whether real or AI-generated, and that such laws align with its existing efforts to stop this content spreading online.

A Step Towards a Safer Digital Future

While Minister Wells acknowledges that the reforms will not “eliminate the problem of abusive technology in one fell swoop,” she expressed confidence that the move, “alongside existing laws and our world-leading online safety reforms, will make a real difference in protecting Australians.” The initiative underscores Australia’s commitment to staying ahead of technological advances and their potential misuse, and to creating a more secure digital environment for all its citizens.