News Reaction – Australia seeks views on ‘safe and responsible’ AI regulation
The evolving use of AI systems presents new cyber security and safety risks that need to be managed. As other major jurisdictions such as the EU and the UK finalise their plans for regulating AI, Australia recently took stock of its own approach through a public discussion paper.
Drawing on input from industry, academia and the wider AI ecosystem, the paper seeks views on the proactive measures Australia can take to promote responsible AI governance and mitigate potential risks.
Responses to AI vary from country to country, from voluntary guidelines and government investment in R&D to regulatory approaches with proposed new AI laws. While the Australian Government does not set out a preferred approach, it is clear that any further regulatory (or other) intervention should:
- Ensure there are appropriate safeguards, especially for high-risk applications of AI and automated decision making
- Provide greater certainty and make it easier for businesses to confidently invest in AI-enabled innovations
- Promote international harmonisation, so that Australia can take advantage of AI-enabled systems supplied on a global scale and foster the growth of AI in Australia
Here, Regional Director, Tim Dillon, comments on the key points from NCC Group’s submission to the paper.
The Australian Government's endeavours to build public trust and enable society to reap the benefits of AI are very welcome. As it finalises its approach to AI regulation, we believe there are a number of practical steps the Government should take, including:
- AI will never be zero risk if we wish to pioneer its development. As such, Australia should define its risk appetite so that red lines with regard to the security, safety and resilience of AI systems are clearly known.
- End-users should be empowered to make decisions about the AI systems they use by improving transparency of where and how AI technologies are being deployed. A Government-led consumer labelling scheme, backed up by independent third-party product validation (particularly for higher-risk products), should be introduced.
- Flexibility, agility and periodic regulatory and legislative reviews should be built in from the outset to keep pace with technological and societal developments.
- As regulators assume a greater role in overseeing the use of AI, their powers, resources and capabilities should be strengthened.
- The Government must focus investment on developing the skills needed to make its regime a success.
- If the Government wants to ensure that Australian languages, religious outlooks, values and cultural references are protected, while also minimising the risk of adopting biases seen elsewhere in the world, steps must be taken to make Australian datasets more readily available for use in AI.
- A clear route map for the development of technical standards should be laid out.
The Australian Government will use the feedback received to consider whether there are any gaps in its governance of AI, working across the public sector to develop a coordinated response.
NCC Group is passionate about sharing our insights from operating at the ‘coalface’ of cyber security with policymakers, so that they can make informed decisions about the regulation of emerging technologies. We look forward to continuing to engage with the Australian Government, and policymakers globally, as their AI governance frameworks are finalised and implemented.