A Member's Bill targeting deepfake abuse has been drawn from Parliament's biscuit-tin ballot, sparking conversation around digital safety. The rise of AI has seen the technology both used and misused in many ways.
Deepfakes are images, videos or audio generated or edited using AI and similar software. This synthetic media may depict real people or fictional characters, and it is now hard to scroll through social media without encountering it.
The digital manipulation of someone’s likeness can have damaging consequences, and several celebrities have been embroiled in deepfake scandals. Hyper-realistic fake videos of famous faces are used as virtual friends and lovers, as pivotal tools in money and data scams, and as promotional material, among other uses. In January 2025, scammers posing as actor Brad Pitt conned a French woman out of more than €830,000, approximately $1.6 million NZD.
Deepfakes are increasingly being used to fabricate ‘proof’ for false news stories, discredit public figures whose careers rely on reputation, and undermine trust in political, health and electoral communications. A 2019 report by Deeptrace (now Sensity) found that 96% of the 14,678 deepfake videos it identified online were pornographic.
Deepfake pornography raises particularly serious ethical concerns, as people’s faces are used without their consent. The problem is not going away; these tools are now being weaponised to commit abuse that has reached classrooms.
The Deepfake Digital Harm and Exploitation Bill would amend the Crimes Act 1961 and the Harmful Digital Communications Act 2015 to expand the definition of ‘intimate visual recording’ to explicitly include images or videos that are created, synthesised, or altered to depict a person’s likeness in intimate contexts without their consent.
“When deepfakes are explicit and generated without consent, it is a harmful and traumatic form of abuse,” says ACT MP Laura McClure, who sponsors the Bill. “The harm is real and it’s happening right now.”
She says that Netsafe receives complaints daily from people whose images have been weaponised to create sexually explicit deepfake abuse. “Since I lodged my bill, I’ve heard from victims who’ve had their lives derailed by deepfake abuse,” she says.
“Parliament now has the opportunity to empower victims of deepfake abuse with a clear pathway toward the removal of the images and the prosecution of their abusers.”
She says the Bill does not seek to ban AI or create a new regulatory regime; it is instead a straightforward amendment to existing laws to address a specific, well-defined and well-understood problem.