Can Australia go it alone on combating deepfake porn?

New draft industry rules would force the likes of Google, Meta and Apple to do more to combat illegal AI-generated material – but how much real power will they have? Alex Kidman reports

There’s growing pressure worldwide to properly regulate AI, whether it’s UK PM Rishi Sunak looking to put “guardrails” on it, the EU slowly inching its way towards AI regulation, or China’s predictably top-down regulatory regime.

Australia’s probably not a place you think of as an AI powerhouse – to most, it’s a country built on sheep and iron ore exports – but it’s been at the forefront of AI regulation, with new draft rules that could (in theory) see some of the tech industry’s giants and heaviest investors in AI bend the knee.

Specifically, the Australian Online Safety Act, which has been in place since early 2022, is being further enhanced with proposed draft standards covering relevant electronic services and designated internet services.

However, while other codes of practice under the Act – covering social media services, internet carriage services, equipment providers, app distribution services and hosting services – were developed by the industry itself, the new draft standards have instead been drawn up by the eSafety Commissioner’s office.

Why? The two industry-developed codes for relevant electronic services and designated internet services were declined registration on 31 May 2023 by Australia’s eSafety Commissioner, Julie Inman Grant, because, in eSafety’s words, “they did not contain appropriate community safeguards for users in Australia”.

Under the provisions of the Act, the eSafety Commissioner has now developed draft standards that are open for public consultation until 21 December 2023. No doubt the affected parties – and we are talking about some of the biggest players not only in AI but in the online space (though the distinction is notably blurry for many of those biggest businesses) – will have their own submissions ready to go by then.

What’s being proposed?

Inman Grant wants to see a more regulated approach to the detection of problematic material, but insists this can sidestep the concerns often raised about having to break encryption to do so.

“eSafety takes the privacy of all Australians very seriously so I want to be very clear on this – eSafety is not requiring companies to break end-to-end encryption through these standards nor do we expect companies to design systematic vulnerabilities or weaknesses into any of their end-to-end encrypted services,” she said in a statement.

Australia’s eSafety Commissioner, Julie Inman Grant

“But operating an end-to-end encrypted service does not absolve companies of responsibility and cannot serve as a free pass to do nothing about these criminal acts.”

The eSafety Commissioner notes that tools already exist that can be used in the fight against objectionable material, including material that has been modified or remixed by AI.

“There are already widely available tools, like Microsoft’s PhotoDNA, used by over 200 organisations and most large companies, that automatically match child sexual abuse images against these databases of “known” and verified material,” she said. “PhotoDNA is not only extremely accurate, with a false positive rate of one in 50 billion, but is also privacy protecting as it only matches and flags known child sexual abuse imagery.”
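
PhotoDNA itself is proprietary and access-controlled, so the snippet below is only a minimal sketch of the general idea – fingerprint an image, then compare that fingerprint against a database of known, verified material – using the open-source ImageHash library rather than PhotoDNA’s actual algorithm. The database contents, threshold and file name here are hypothetical.

```python
# A toy sketch of hash-matching against a database of "known" material.
# Uses the open-source ImageHash library (pip install ImageHash Pillow);
# this is NOT PhotoDNA, whose algorithm and database are proprietary.
import imagehash
from PIL import Image

# Hypothetical database of perceptual hashes of known, verified images.
KNOWN_HASHES = [imagehash.hex_to_hash("d1c4c4f0e0c0f0e1")]

# Maximum Hamming distance that counts as a match: 0 is an exact hash
# match; small values tolerate re-encoding or resizing. Production systems
# tune this carefully to keep the false-positive rate extremely low.
MATCH_THRESHOLD = 4

def matches_known_material(path: str) -> bool:
    """True if the image's perceptual hash is close to a known hash."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MATCH_THRESHOLD for known in KNOWN_HASHES)

print(matches_known_material("upload.jpg"))  # hypothetical file
```

The privacy argument rests on exactly this design: the system only compares fingerprints against already-verified material, rather than interpreting or classifying content it has never seen before.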

She also points out that encrypted services commonly scan the unencrypted parts of their platforms to flag potentially problematic content.

“Meta’s end-to-end encrypted WhatsApp messaging service already scans the non-encrypted parts of its service including profile and group chat names and pictures that might indicate accounts are providing or sharing child sexual abuse material. These and other interventions enable WhatsApp to make one million reports of child sexual exploitation and abuse each year. This is one example of measures companies can take.” 
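
As a rough illustration of that model – checking only the unencrypted surfaces of a service, such as group names and profile names, while message bodies stay encrypted end to end – here is a minimal sketch. The field names and term list are hypothetical and bear no relation to WhatsApp’s actual systems.

```python
# Illustrative only: flagging accounts from plaintext metadata alone.
# Message bodies are end-to-end encrypted and are never inspected here.
FLAGGED_TERMS = {"example-flagged-term"}  # placeholder; real lists are curated

def scan_unencrypted_metadata(group_name: str, profile_name: str) -> bool:
    """Flag an account based only on visible, unencrypted metadata."""
    visible_text = f"{group_name} {profile_name}".lower()
    return any(term in visible_text for term in FLAGGED_TERMS)
```

The point is structural: nothing in this path requires decrypting a message, which is why such scanning can coexist with end-to-end encryption.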

Under the proposed draft standards, encrypted services would not get their own exemption category; instead, to avoid falling foul of the rules, it would be up to companies to show that detecting such material is not technically feasible. According to the draft fact sheet, “matters to consider in relation to technical feasibility include whether it is reasonable for a service provider to incur the costs of taking action, having regard to the level of risk to the online safety of end-users”.

The draft standards and the process for submissions can be found on eSafety’s website.

Can Australia’s work make a difference to the broader online world?

That’s the question, isn’t it?

There’s no doubt that combating this kind of material is necessary, and it’s good to see Australia take a leading position here.

The devil, as always, will be in both the detail and the enforcement. The new drafts are proposed to be enacted six months after registration, which would place them roughly mid-2024 at the earliest, presuming further consultation is not sought. Six months is a long time in AI, but it’s also important to get the details right to ensure a regulatory regime that is robust and able to deal with both current issues and any emergent concerns down the track.

Enforcement is the other difficult area, and you don’t have to look far in Australia’s online enforcement space to see how it can go off the rails.

X, the service formerly known as Twitter, has already fallen foul of the Online Safety Act, having failed to properly respond to questions – sent to it by the eSafety Commissioner back in February – about how it detects and removes problematic material.

For that lack of response, Inman Grant fined the company $610,500 on 16 October, with 28 days to pay. As yet, there’s no sign of X either formally requesting that the penalty be withdrawn or supplying the requested information, even belatedly.

That leaves the commissioner free to take the matter to court, where X could face a further $780,000 per day for non-compliance – but with what appears to be zero staff in Australia and larger, more pressing financial concerns, it’s not clear X will pay much attention.

For the trivia fans: one of Ms Inman Grant’s former employers, from 2014 to 2016, was… Twitter.

Then again, under new owner Elon Musk – a man with a, shall we say, spotty history of paying his bills – a great many staff were laid off, so that connection probably doesn’t give her much of an “in” with the business.

Alex Kidman

Alex Kidman is an award-winning freelance journalist based in Australia. In a career spanning more than 25 years, he’s been an editor at CNET, Gizmodo, Finder, PC Mag Australia and APCMag. He’s the co-host of Vertical Hold: Behind The Tech News, a podcast breaking down the big tech stories of the week.
