AI, Data & Tech ... Simplified!

Summary of Australian integrated AI regulation approach

Ali Mirzaei

October 30, 2024

Implementing AI regulation is a crucial step for Australia, aligning it with global leaders in ensuring AI is safe, ethical and transparent. This article summarises the proposed Australian AI regulatory guardrails at a glance.

For AI system developers or deployers working within Australia or with Australian industries or government, the "Safe and Responsible AI in Australia" proposals paper (currently 69 pages) sets out a comprehensive proposed regulatory framework. I have summarised the key points here for a quick overview.

 

The anticipated regulations in each country/region are crucial, as they will substantially affect technical, legal, and executive processes within the AI industry both domestically and globally.

 

 

𝗢𝘃𝗲𝗿𝘃𝗶𝗲𝘄

 

This document:

  • is currently in the confirmation process. I have condensed the content to retain the main points of the September 2024 version; for further details, refer to the full document here (PDF).

  • reflects and refers to approaches mainly from the EU and Canada (and also the UK and US) to maintain international consistency.
  • defines high-risk settings, 10 mandatory guardrails, and 3 possible approaches to mandate the guardrails.
  • differentiates between narrow AI systems and general-purpose AI (GPAI) models.

 

 

𝗗𝗲𝗳𝗶𝗻𝗶𝗻𝗴 𝗵𝗶𝗴𝗵-𝗿𝗶𝘀𝗸 𝗔𝗜

 

Category 1: where the uses of an AI system or GPAI model are known or foreseeable and assessed as high-risk. The risk assessment for this category is based on the following principles (a rough sketch of such an assessment follows the list), considering the adverse impacts to:

a.  an individual’s rights recognised in Australian human rights law without justification, in addition to Australia’s international human rights law obligations
b.  an individual’s physical or mental health or safety
c.  legal effects, defamation or similarly significant effects on an individual
d.  groups of individuals or collective rights of cultural groups
e.  the broader Australian economy, society, environment and rule of law
f.  the severity and extent of those adverse impacts outlined in principles (a) to (e) above.
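
As a rough, non-authoritative illustration of how a developer or deployer might document an assessment against principles (a) to (f): the class name, fields and classification rule below are my own assumptions and are not defined in the proposals paper.

```python
from dataclasses import dataclass

# Hypothetical sketch only: field names and the classification rule are my own
# illustration of principles (a)-(f); they are not prescribed by the proposals paper.
@dataclass
class HighRiskAssessment:
    impacts_human_rights: bool                 # principle (a)
    impacts_health_or_safety: bool             # principle (b)
    legal_or_similar_effects: bool             # principle (c)
    impacts_groups_or_cultural_rights: bool    # principle (d)
    impacts_economy_society_environment: bool  # principle (e)
    severity: str = "low"      # principle (f): severity of the impacts above
    extent: str = "limited"    # principle (f): extent of the impacts above

    def is_high_risk(self) -> bool:
        """Illustrative rule: any adverse impact that is also severe or widespread."""
        any_adverse_impact = any([
            self.impacts_human_rights,
            self.impacts_health_or_safety,
            self.legal_or_similar_effects,
            self.impacts_groups_or_cultural_rights,
            self.impacts_economy_society_environment,
        ])
        return any_adverse_impact and (self.severity == "high" or self.extent == "widespread")
```

The actual thresholds will depend on the final legislation; the point is simply that the principles lend themselves to a documented, repeatable assessment.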

 

Category 2: advanced GPAI models, where not all possible applications and risks can be foreseen. GPAI is defined as:

 

𝘈𝘯 𝘈𝘐 𝘮𝘰𝘥𝘦𝘭 𝘵𝘩𝘢𝘵 𝘪𝘴 𝘤𝘢𝘱𝘢𝘣𝘭𝘦 𝘰𝘧 𝘣𝘦𝘪𝘯𝘨 𝘶𝘴𝘦𝘥, 𝘰𝘳 𝘤𝘢𝘱𝘢𝘣𝘭𝘦 𝘰𝘧 𝘣𝘦𝘪𝘯𝘨 𝘢𝘥𝘢𝘱𝘵𝘦𝘥 𝘧𝘰𝘳 𝘶𝘴𝘦, 𝘧𝘰𝘳 𝘢 𝘷𝘢𝘳𝘪𝘦𝘵𝘺 𝘰𝘧 𝘱𝘶𝘳𝘱𝘰𝘴𝘦𝘴, 𝘣𝘰𝘵𝘩 𝘧𝘰𝘳 𝘥𝘪𝘳𝘦𝘤𝘵 𝘶𝘴𝘦 𝘢𝘴 𝘸𝘦𝘭𝘭 𝘢𝘴 𝘧𝘰𝘳 𝘪𝘯𝘵𝘦𝘨𝘳𝘢𝘵𝘪𝘰𝘯 𝘪𝘯 𝘰𝘵𝘩𝘦𝘳 𝘴𝘺𝘴𝘵𝘦𝘮𝘴.

 

The Australian Government proposes to apply mandatory guardrails to all GPAI models.

 

 

𝟭𝟬 𝗺𝗮𝗻𝗱𝗮𝘁𝗼𝗿𝘆 𝗴𝘂𝗮𝗿𝗱𝗿𝗮𝗶𝗹𝘀 𝗳𝗼𝗿 𝗵𝗶𝗴𝗵-𝗿𝗶𝘀𝗸 𝘀𝗲𝘁𝘁𝗶𝗻𝗴𝘀

 

The goal of mandatory guardrails is to ensure:

  • Testing to ensure performance against defined metrics
  • Transparency about the development process and application
  • Accountability for governance

 

These guardrails for both developers and deployers are:

 

1. Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance. Organisations must make their accountability processes publicly available, covering:

  • a documented approach to regulatory compliance
  • policies for data and risk management
  • clear roles, responsibilities and reporting structures for staff
  • details of the training organisations make available to staff

 

2. Establish and implement a risk management process to identify and mitigate risks arising from a high-risk AI system, using the high-risk principles. This process includes assessing the impacts of risks; identifying, applying and monitoring mitigation measures; and putting mechanisms in place to identify new risks.
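
A minimal sketch of how such a process could be recorded in practice, assuming a simple risk register (the structure and field names are my own, not prescribed by the paper):

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative risk register entry covering identification, impact assessment,
# mitigation and ongoing monitoring. All field names are assumptions.
@dataclass
class RiskEntry:
    description: str               # the identified risk
    related_principles: list[str]  # e.g. ["a", "b"] from the high-risk principles
    impact: str                    # assessed impact, e.g. "high"
    likelihood: str                # assessed likelihood, e.g. "possible"
    mitigations: list[str] = field(default_factory=list)
    last_reviewed: date | None = None

    def needs_review(self, today: date, max_age_days: int = 90) -> bool:
        """Flag entries that have never been reviewed or are overdue for monitoring."""
        return self.last_reviewed is None or (today - self.last_reviewed).days > max_age_days
```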

 

3. Protect AI systems and implement data governance measures to manage data quality and provenance, data privacy and cybersecurity, to ensure the reliability of an AI model and prevent biased or discriminatory outputs.

 

4. Test AI models and systems to evaluate model performance and monitor the system once deployed (supported by methodologies outlined in known standards).
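
As a hedged sketch of what this could look like operationally: a pre-deployment gate against documented acceptance thresholds and a post-deployment drift check. The metric names and thresholds below are placeholders, not taken from the paper or any standard.

```python
# Illustrative only: acceptance thresholds would come from the organisation's own
# documented performance requirements, not from this sketch.
ACCEPTANCE_THRESHOLDS = {"accuracy": 0.90, "false_positive_rate": 0.05}

def passes_pre_deployment_tests(metrics: dict[str, float]) -> bool:
    """Compare evaluated metrics against the documented acceptance thresholds."""
    return (
        metrics.get("accuracy", 0.0) >= ACCEPTANCE_THRESHOLDS["accuracy"]
        and metrics.get("false_positive_rate", 1.0) <= ACCEPTANCE_THRESHOLDS["false_positive_rate"]
    )

def drift_detected(baseline_accuracy: float, live_accuracy: float, tolerance: float = 0.05) -> bool:
    """Flag the deployed system for review if live performance degrades beyond tolerance."""
    return (baseline_accuracy - live_accuracy) > tolerance
```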

 

5. Enable human control or intervention in an AI system to achieve meaningful human oversight. AI’s operations and outputs should be reversible by a human if necessary.
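
A minimal sketch of what meaningful human oversight might look like, assuming a hypothetical review queue; none of these names or thresholds come from the paper.

```python
from dataclasses import dataclass

# Illustrative only: low-confidence decisions are routed to a human reviewer,
# and any AI-made outcome can be reversed by a human after the fact.
@dataclass
class AIDecision:
    subject_id: str
    outcome: str
    confidence: float
    reversed_by_human: bool = False

def route_decision(decision: AIDecision, review_queue: list[AIDecision],
                   confidence_floor: float = 0.8) -> None:
    """Send uncertain decisions to a human reviewer instead of applying them automatically."""
    if decision.confidence < confidence_floor:
        review_queue.append(decision)

def reverse_decision(decision: AIDecision, new_outcome: str) -> AIDecision:
    """Let a human override and reverse an AI-made outcome, as guardrail 5 expects."""
    decision.outcome = new_outcome
    decision.reversed_by_human = True
    return decision
```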

 

6. Inform end-users about AI-enabled decisions, interactions with AI and AI-generated content, including how and where AI has been used, in a clear and accessible manner.
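
For AI-generated content, this guardrail amounts to attaching a clear, user-facing disclosure. A trivial sketch, with wording and fields that are my own assumptions:

```python
from datetime import datetime, timezone

def with_ai_disclosure(content: str, model_name: str) -> dict:
    """Wrap generated content with a plain-language disclosure for the end-user.
    The disclosure wording and metadata fields are illustrative assumptions only."""
    return {
        "content": content,
        "disclosure": f"This content was generated with the assistance of an AI system ({model_name}).",
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
```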

 

7. Establish processes for people impacted by AI systems to challenge use or outcomes, including internal complaint-handling functions and human staff.

 

8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks. This includes transparent application guidelines from developers and failure reports from deployers.

 

9. Keep and maintain records to allow third parties to assess compliance with the guardrails, including the AI system description, design specifics, capabilities and limitations, testing methodologies and results, dataset details, risk management processes and human oversight measures.
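
One way to keep such records in a form a third party could audit is a structured document per system, loosely in the spirit of a model card. The schema below is my own assumption, not a mandated format:

```python
import json
from dataclasses import asdict, dataclass

# Illustrative record-keeping structure; the fields mirror guardrail 9
# but the schema itself is an assumption.
@dataclass
class AISystemRecord:
    system_description: str
    design_specifics: str
    capabilities_and_limitations: str
    testing_methodology: str
    testing_results: str
    dataset_details: str
    risk_management_process: str
    human_oversight_measures: str

    def to_json(self) -> str:
        """Serialise the record so it can be retained and shared with assessors."""
        return json.dumps(asdict(self), indent=2)
```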

 

10. Undertake conformity assessments to demonstrate and certify compliance with the guardrails before placing a high-risk AI system on the market, carried out by the developers themselves, by a third party, or by government entities or regulators. Organisations will need to repeat the assessment periodically (or when a system change impacts compliance) to ensure continued compliance.
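
A conformity assessment can be thought of as a periodic, evidenced check over guardrails 1 to 9. A toy self-assessment sketch, where the evidence model and reassessment interval are assumptions:

```python
from datetime import date, timedelta

# Illustrative self-assessment: each guardrail number maps to a piece of documented
# evidence. The guardrail numbering follows the paper; everything else is assumed.
def conformity_report(evidence: dict[int, str], assessed_on: date) -> dict:
    """Report which of guardrails 1-9 have documented evidence and whether all pass."""
    missing = [g for g in range(1, 10) if not evidence.get(g)]
    return {
        "assessed_on": assessed_on.isoformat(),
        "missing_evidence_for_guardrails": missing,
        "compliant": not missing,
        "reassess_by": (assessed_on + timedelta(days=365)).isoformat(),
    }
```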

 

Notes:

 

  • Developers and deployers will need to consider who their end-users are.
  • Users need to meet any legal obligations under existing laws.

 

 

𝗥𝗲𝗴𝘂𝗹𝗮𝘁𝗼𝗿𝘆 𝗼𝗽𝘁𝗶𝗼𝗻𝘀 𝘁𝗼 𝗺𝗮𝗻𝗱𝗮𝘁𝗲 𝗴𝘂𝗮𝗿𝗱𝗿𝗮𝗶𝗹𝘀:
 

The proposal paper outlines three potential regulatory models to implement the guardrails:

 

1. A domain-specific approach – Adopting the guardrails within existing regulatory frameworks as needed (a sector-by-sector review of each relevant piece of legislation)
2. A framework approach – Introducing new framework legislation to adapt existing regulatory frameworks across the economy
3. A whole-of-economy approach – Introducing a new cross-economy, AI-specific Act

 

 

Final notes:

 

  • This document proposes treating national security and defence applications separately from civilian applications, similar to the US and EU.
  • See the original document for further details.
  • Future versions of this document may update or modify the contents outlined above.
