Australian government vows to make AI safer

The Australian government has pledged to introduce mandatory safeguards for “high-risk” AI as it tries to strike a balance between fostering innovation and managing concerns related to safety and responsible use of AI systems.

The pledge comes in response to a consultation on safe and responsible AI that the government launched last year, which drew more than 500 submissions.

“We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI,” Australian Minister for Industry and Science Ed Husic said following the release of the federal government’s interim response to the industry consultation.

The response noted that only a third of Australians agree the country has adequate guardrails to make the design, development and deployment of AI safe.

“While AI is forecast to grow our economy, there is low public trust that AI systems are being designed, developed, deployed and used safely and responsibly,” the paper said. “This acts as a handbrake on business adoption and public acceptance.”

What the Australian government will do about AI (and what it won’t do)

While the government mulls over what those possible mandatory guardrails will look like – whether it will be through changes to existing laws or creating new AI-specific laws – it pledged that those guardrails will aim to “promote the safe design, development and deployment of AI systems” related to testing, transparency and accountability. It also promised that the vast majority of “low-risk” AI use can continue.

In addition, the response outlined the immediate actions the government will take. This includes working with industry to develop a voluntary AI safety standard, voluntary labelling and watermarking of AI-generated materials, and establishing an expert advisory group to support the development of options for mandatory guardrails.

“We want safe and responsible thinking baked in early as AI is designed, developed and deployed,” Husic said.

In its response, the government added it will monitor how other countries respond to AI challenges, including initial efforts in the European Union, United States and Canada.

Late last year, the European Union agreed on a landmark Artificial Intelligence Act, which will prohibit AI practices deemed to pose an unacceptable risk, such as social scoring, untargeted scraping of facial images and certain forms of biometric surveillance.

Aimee Chanthadavong

Aimee Chanthadavong has been a journalist, editor and content producer for more than a decade. During that time she's covered enterprise technology for premium websites such as ZDNet and InnovationAus as well as food and travel for Broadsheet and SBS.
