AI Regulation in Australia: Where Things Stand in March 2026
Keeping track of AI regulation in Australia right now feels like watching a construction site from across the street. There’s clearly a lot of activity and things are taking shape, but it’s not entirely clear what the finished building will look like.
Multiple regulatory instruments are in various stages of development. Federal, state, and sector-specific regulators are all moving, sometimes in coordination and sometimes independently. For businesses building or deploying AI, understanding what’s happening, what’s likely coming, and what’s actually enforceable today is essential for planning and risk management.
Here’s where things stand as of March 2026.
The Federal Framework
Voluntary AI Ethics Principles
Australia’s eight AI Ethics Principles, published by the Department of Industry, Science and Resources, remain the centrepiece of the federal approach. These principles — covering accountability, transparency, fairness, privacy, human oversight, safety, contestability, and reliability — are voluntary guidelines rather than enforceable regulations.
The voluntary nature has been both criticised and defended. Critics argue that voluntary principles are toothless and allow organisations to claim adherence without meaningful compliance. Defenders argue that the rapid pace of AI development makes rigid regulation premature and that voluntary principles provide the flexibility for industry to innovate while working toward responsible practices.
In practice, the principles are widely cited in corporate AI policies and governance documents. They’re referenced in government procurement requirements. And they’re increasingly used by regulators as a benchmark for assessing AI deployment practices, even though they’re not directly enforceable.
Mandatory Guardrails Consultation
The federal government has been consulting on mandatory guardrails for AI in high-risk settings. The consultation papers released in late 2025 proposed a framework that would require organisations deploying AI in specific high-risk domains — healthcare, employment, financial services, government services, and critical infrastructure — to meet minimum standards for testing, transparency, and human oversight.
The proposed approach draws heavily on the EU AI Act’s risk-based framework but is explicitly designed to be less prescriptive. Where the EU Act defines specific categories of prohibited and high-risk AI applications, the Australian approach focuses on outcomes and principles rather than detailed technical requirements.
Industry submissions to the consultation were mixed. Technology companies generally supported the principles-based approach but expressed concern about regulatory uncertainty. Consumer advocacy groups pushed for stronger, more prescriptive requirements. Industry associations sought clarity on which applications would be classified as high-risk.
The finalised mandatory framework is expected to be announced in mid-2026, with implementation timelines likely extending into 2027. Organisations deploying AI in potentially high-risk domains should be preparing now, even without the final details.
Sector-Specific Regulation
Financial Services
APRA (the Australian Prudential Regulation Authority) has been the most active sector regulator on AI. Their guidance on the use of AI in prudential decision-making requires regulated entities to maintain human oversight of AI-driven decisions, document model governance processes, and demonstrate that AI models don’t introduce unfair bias into lending, insurance, or other financial decisions.
ASIC has focused on AI in financial advice and market operations. Its position is that existing financial services regulations apply to AI-driven advice and trading in the same way they apply to human-driven activities. If an AI system provides personal financial advice, it must comply with the same licensing, disclosure, and quality requirements as a human adviser.
Healthcare
The Therapeutic Goods Administration (TGA) has clarified that AI-based medical devices — software that diagnoses, recommends treatment, or makes clinical decisions — are subject to the same regulatory framework as traditional medical devices. This includes pre-market assessment, quality management requirements, and post-market surveillance.
For AI applications that support but don’t replace clinical decision-making, the regulatory position is less clear. Tools that help clinicians access information, summarise records, or manage administrative tasks don’t currently fall under TGA regulation, but the boundary between clinical decision support and clinical decision-making is blurry.
Employment
Fair Work and workplace discrimination regulators have flagged AI in hiring and workforce management as an area of concern. AI-powered resume screening, automated interview analysis, and algorithmic workforce scheduling all raise questions under existing anti-discrimination law.
The key principle is that using AI doesn’t create a defence against discrimination claims. If an AI hiring tool systematically disadvantages applicants based on protected attributes — even if no human intended that outcome — the organisation using the tool may still be liable.
Privacy and Data Protection
The interaction between AI and privacy law remains one of the more complex regulatory areas. The Privacy Act review, which has been progressing for several years, has produced proposed reforms that will directly affect AI development and deployment.
Key privacy considerations for AI include:
Training data: If AI models are trained on personal information, the collection, use, and disclosure of that information is subject to the Privacy Act. Organisations need to ensure they have an appropriate legal basis for using personal information in AI training data, which may require consent, contractual provisions, or reliance on specific exemptions.
Inference as personal information: When an AI model infers characteristics about an individual — their creditworthiness, health risk, purchasing intent — those inferences may themselves constitute personal information subject to privacy protection. The implications of this interpretation are still being worked through.
Automated decision-making transparency: Proposed Privacy Act reforms include provisions for individuals to be informed when significant decisions about them are made using automated processes, and to request human review of those decisions.
State and Territory Activity
NSW has been particularly active, establishing an AI Review Committee and publishing guidance on AI use in government services. The NSW approach emphasises transparency — government agencies using AI to make or inform decisions about individuals must be able to explain how those decisions were made.
Victoria’s approach has focused on AI in public services and healthcare, with specific guidance for local government and health services deploying AI tools.
Queensland has emphasised AI ethics training for public servants and has established an AI advisory body to guide government AI adoption.
What Businesses Should Do Now
The regulatory landscape is unsettled, but there are sensible steps that any organisation deploying AI should take now, regardless of how the specific regulations develop.
Document your AI governance framework. Have clear policies about how AI tools are selected, tested, deployed, and monitored. These policies should address bias testing, human oversight, data handling, and incident response.
Understand your high-risk applications. If you’re using AI in hiring, lending, healthcare, or government services, assume that mandatory requirements are coming and start building compliance capabilities now.
Maintain transparency. Be prepared to explain how your AI systems work to regulators, customers, and affected individuals. “The algorithm decided” is not an acceptable explanation.
Test for bias. Regularly test AI systems for discriminatory outcomes across protected attributes. Document the testing methodology and results.
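One common way to operationalise this kind of testing is to compare selection rates across groups. The sketch below is illustrative only: the four-fifths (80%) ratio it checks is a widely used heuristic from US employment-testing guidance, not a threshold prescribed by Australian law, and the data and function names are hypothetical.

```python
# Illustrative disparate-impact check for an AI screening tool.
# Flags any group whose selection rate falls below 80% of the
# highest group's rate (the "four-fifths" heuristic).
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected: bool) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratios(outcomes):
    """Each group's selection rate relative to the highest rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes: group A selected 40/100, group B 24/100.
outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 24 + [("B", False)] * 76)

ratios = disparate_impact_ratios(outcomes)
flagged = {g for g, r in ratios.items() if r < 0.8}
# Group B's rate (0.24) is 60% of group A's (0.40), so B is flagged.
```

A check like this is a starting point, not a conclusion: a flagged ratio warrants investigation of the model and its training data, and the methodology and results should be documented either way.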
Ensure human oversight. For consequential decisions, maintain meaningful human review capability. “Rubber-stamping” AI recommendations doesn’t constitute meaningful oversight.
The organisations that will navigate Australia’s evolving AI regulatory landscape most effectively are those that treat responsible AI practices as good business rather than compliance overhead. The regulations, when they arrive, will likely formalise practices that well-governed organisations are already following.