Much was said at the AI Safety Summit, writes Sydney-based Alex Kidman, but it’s time for the Australian government to fill in the detail
At last week’s AI Safety Summit at Bletchley Park, 28 countries signed the Bletchley Declaration, which lays down a framework – a very loose one – to mitigate the risks associated with AI development. All well and good, and exactly the kind of summit at which you see politicians smiling and waving to cameras.
Australia is a signatory to the Bletchley agreement – thanks to the way the alphabet works, we’re even first on the list – with the Albanese Labor government represented by Deputy PM Richard Marles and the Minister for Industry and Science Ed Husic.
The official Australian government release quotes Husic as stating that while AI has its definite benefits, “there are real and understandable concerns with how this technology could impact our world. We need to act now to make sure safety and ethics are in-built. Not a bolt-on feature down the track.”
He’s right. The scope for the Bletchley Declaration to lead a bright, shiny and above all safe new AI-driven world is certainly there. But, as has already been noted, there’s the pesky little problem that none of it is legally binding.
Companies might accidentally develop the next Skynet, and while we might all be a tad busy running from the T-1000s when that happens, when the dust settles they won’t have broken any laws. Or at least, not yet.
This is where the relevant countries, including Australia, need to step up. Sadly, there’s not a lot of detail about how Australia specifically will work to grow a market predicted to bring hundreds of billions of dollars to the Australian economy by the end of the decade while still stopping that whole AI-led destruction of humanity business.
The Australian government’s announcement lauds the UK for setting up an AI safety institute (with which Australia will “collaborate” in some undefined capacity) and notes that the CSIRO’s Chief Scientist will sit on the panel overseeing an annual Frontier AI State of the Science report.
Privacy laws and AI
All of this is again commendable, but doesn’t get us any closer to regulation that might actually control what companies do with AI in the meantime. It’s a complex matter that’s likely to intertwine with another piece of forthcoming legislation that could have significant impacts on the use and development of AI down under, as the government looks to revise Australia’s privacy laws.
To say these are a touch out of date would be a huge understatement, given the act it’s looking to revise was laid down in 1988. Back when the only AI living in your computer might have been a copy of Little Computer People.
The revision of the Privacy Act could work hand-in-hand with safer AI measures, given the government has already agreed to most of the principles laid down in the Privacy Act Review.
From a business standpoint, however, you’re more likely to run into the revised act – expected to be legislated in 2024 – over matters such as the removal of the small business exemption or the proposed requirement to report data breaches in a timely manner, long before you hit AI matters. But that doesn’t mean the AI questions should be ignored, or indeed sacrificed on the altar of economic expediency.
It’s just one thread, of course, in the complex web that needs to be woven into any legislation that touches on AI. Given the rapid pace of AI development, that’s likely to be most legislation moving forward.
The revised Privacy Act is expected to be legislated (or at least attempted) in 2024, by which time further global AI safety summits are planned.
Maybe the future ones will deliver more on the detail and less on the shiny-sounding in-principle announcements? We’ll have to wait and see.