Australian government vows to make AI safer
The Australian government has pledged to introduce mandatory safeguards for “high-risk” AI as it tries to strike a balance between fostering innovation and managing concerns related to safety and responsible use of AI systems.
The pledge comes in response to a consultation on safe and responsible AI that the government launched last year, which attracted more than 500 submissions.
“We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI,” Australian Minister for Industry and Science Ed Husic said following the release of the federal government’s interim response to the industry consultation.
The response noted that only a third of Australians agree the country has adequate guardrails to make the design, development and deployment of AI safe.
“While AI is forecast to grow our economy, there is low public trust that AI systems are being designed, developed, deployed and used safely and responsibly,” the paper said. “This acts as a handbrake on business adoption and public acceptance.”
What the Australian government will do about AI (and what it won’t do)
While the government is still deciding what form those mandatory guardrails will take, whether through amendments to existing laws or new AI-specific legislation, it pledged that they will aim to “promote the safe design, development and deployment of AI systems” through requirements covering testing, transparency and accountability. It also promised that the vast majority of “low-risk” AI use can continue unaffected.
In addition, the response outlined the immediate actions the government will take. This includes working with industry to develop a voluntary AI safety standard, voluntary labelling and watermarking of AI-generated materials, and establishing an expert advisory group to support the development of options for mandatory guardrails.
“We want safe and responsible thinking baked in early as AI is designed, developed and deployed,” Husic said.
In its response, the government added it will monitor how other countries respond to AI challenges, including initial efforts in the European Union, United States and Canada.
Late last year, the European Union reached agreement on its landmark Artificial Intelligence Act, which will prohibit AI practices deemed to pose unacceptable risk, such as social scoring, untargeted scraping of facial images and most real-time biometric surveillance in public spaces, while imposing strict obligations on high-risk systems.