# About Black Mountain AI

## Why We Built This Toolkit
The GovSafeAI Toolkit exists because we saw a gap: government agencies are under pressure to adopt AI, but the available guidance is often either too abstract (high-level principles without practical application) or too technical (ML engineering guides that assume expertise most agencies don't have).
We built this toolkit to be the resource we wished existed when we started working with government on AI:
- Practical over theoretical — Templates you can actually fill in, not frameworks you admire from a distance
- Honest about challenges — The "Hard Stuff" sections exist because we've seen too many projects fail for political and organisational reasons that no one talked about
- Australian context — Built specifically for APS requirements, legislation, and ways of working
- Open and free — Because good governance shouldn't be locked behind consulting fees
## Who We Are
Black Mountain AI is a Canberra-based consultancy specialising in responsible AI for government and regulated industries.
We're named after Black Mountain — the landmark that watches over Canberra and the institutions that serve Australia. Like that mountain, we aim to be a reliable presence: grounded, enduring, and here when you need us.
## What We Do
| Service | Description |
|---|---|
| AI Strategy | Helping agencies develop practical AI strategies aligned with policy requirements |
| Use Case Assessment | Evaluating whether AI is right for your problem (sometimes the answer is no) |
| Responsible AI Implementation | Ethics reviews, bias testing, governance frameworks |
| Capability Building | Training teams to work effectively with AI systems |
| Independent Review | Third-party assessment of AI projects and vendors |
## Our Approach
We believe in:
- Pragmatism over perfection — Done well is better than perfect never
- Honesty over comfort — We'll tell you if your project is in trouble
- Building capability — We'd rather teach you to fish than sell you fish forever
- Public value — AI should serve citizens, not just efficiency metrics
## The Toolkit Philosophy
This toolkit reflects how we think about AI in government:
### 1. Process Isn't the Point
Governance frameworks exist to reduce risk and improve outcomes — not to generate paperwork. If a process isn't helping, it's hurting. That's why we included the Anti-Toolkit section.
### 2. Politics Matter
Most AI projects don't fail for technical reasons. They fail because of organisational politics, unrealistic expectations, or stakeholder resistance. The Coalition Builder and Forbidden Questions sections address this human side of AI projects.
### 3. Consequences Cascade
Every decision creates ripples. The Consequence Simulator helps you think through second- and third-order effects before they become problems.
### 4. Templates Are Starting Points
Every template in this toolkit is meant to be adapted. Your context is unique: use what helps, modify what doesn't quite fit, and ignore what's irrelevant.
## Contributing
This toolkit is open source. We welcome contributions from:
- APS practitioners who've learned lessons worth sharing
- Researchers working on AI governance
- Anyone who spots an error or has a better way of doing things
See our GitHub repository to contribute.
## Get in Touch

## Acknowledgements
This toolkit was developed with input from practitioners across the Australian Public Service. We're grateful to everyone who shared their experiences, challenges, and hard-won lessons.
We also acknowledge that we work on the lands of the Ngunnawal and Ngambri peoples, and pay our respects to Elders past and present.
Black Mountain AI — Canberra, Australia