# AI Use Case Identification
- Time to complete: 1-2 hours
- Who should complete: Business owner with technical advisor
- Key output: Go/no-go decision on pursuing AI approach
- Next step: Business Case Template if proceeding
## Section 1: Problem Definition

### 1.1 What Problem Are You Trying to Solve?

Brief Description (2-3 sentences):

Current State:

- How is this problem currently addressed?
- What are the pain points or inefficiencies?
- What is the impact of not solving this problem?

Desired Future State:

- What would success look like?
- What specific outcomes are you seeking?
- How will you measure improvement?
### 1.2 Who Is Affected?

Primary Users/Beneficiaries:

- [ ] APS Staff
- [ ] Citizens/Public
- [ ] Businesses
- [ ] Other government agencies
- [ ] Other: _______________

Stakeholder Map:

| Stakeholder Group | Role/Interest | Impact Level (High/Med/Low) |
|-------------------|---------------|------------------------------|
| | | |
| | | |
## Section 2: AI Suitability Assessment

### 2.1 Is AI Appropriate for This Use Case?
Answer these questions (Yes/No/Unsure):
| Question | Answer | Notes |
|---|---|---|
| Is the problem clearly defined and measurable? | | |
| Is there relevant data available (or can it be collected)? | | |
| Is the problem pattern-based rather than rules-based? | | |
| Would a human be able to perform this task given the same data? | | |
| Is the cost/effort of AI justified by the potential benefit? | | |
| Are there ethical and responsible ways to use AI for this? | | |
| Is there support from leadership and stakeholders? | | |

Interpretation:

- Mostly Yes: Good candidate for AI
- Mix of Yes/No: May be suitable but requires careful scoping
- Mostly No: AI may not be the right solution - consider alternatives
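If you record these answers electronically, the interpretation rule above can be reduced to a simple tally. The sketch below is illustrative only: the sample answers and the "mostly" threshold of five are assumptions based on this template, not a prescribed scoring method.

```python
# Minimal sketch: tally Yes/No/Unsure answers from Section 2.1 and map them
# to the interpretation bands above. Thresholds are illustrative assumptions.
answers = {
    "Is the problem clearly defined and measurable?": "Yes",
    "Is there relevant data available (or can it be collected)?": "Yes",
    "Is the problem pattern-based rather than rules-based?": "Unsure",
    "Would a human be able to perform this task given the same data?": "Yes",
    "Is the cost/effort of AI justified by the potential benefit?": "No",
    "Are there ethical and responsible ways to use AI for this?": "Yes",
    "Is there support from leadership and stakeholders?": "Yes",
}

yes_count = sum(1 for a in answers.values() if a == "Yes")
no_count = sum(1 for a in answers.values() if a == "No")

if yes_count >= 5:    # "Mostly Yes"
    verdict = "Good candidate for AI"
elif no_count >= 5:   # "Mostly No"
    verdict = "AI may not be the right solution - consider alternatives"
else:                 # mix of Yes/No/Unsure
    verdict = "May be suitable but requires careful scoping"

print(f"{yes_count} Yes / {no_count} No -> {verdict}")
```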
### 2.2 AI Use Case Type
Select the category that best fits (check one):
- Classification/Categorization - Assigning labels or categories (e.g., document classification, risk scoring)
- Prediction/Forecasting - Predicting future outcomes (e.g., demand forecasting, risk prediction)
- Natural Language Processing - Understanding or generating text (e.g., chatbots, summarization, sentiment analysis)
- Computer Vision - Analyzing images or video (e.g., object detection, facial recognition)
- Recommendation - Suggesting options or next best actions (e.g., content recommendations, case routing)
- Anomaly Detection - Identifying unusual patterns (e.g., fraud detection, system monitoring)
- Optimization - Finding optimal solutions (e.g., resource allocation, scheduling)
- Generation - Creating new content (e.g., text generation, synthetic data, image generation)
- Other: _______________
## Section 3: Data Assessment

### 3.1 Data Availability
What data exists or can be collected?
| Data Source | Description | Volume | Quality (High/Med/Low) | Accessibility |
|---|---|---|---|---|
Data Characteristics:

- Is the data labeled (for supervised learning)?
  - [ ] Yes, fully labeled
  - [ ] Partially labeled
  - [ ] No labels (unsupervised)
- Is historical data representative of future scenarios?
  - [ ] Yes
  - [ ] Mostly
  - [ ] No - significant changes expected
Data format:
- Structured (databases, spreadsheets)
- Unstructured (text, images, audio)
- Semi-structured (JSON, XML)
- Mixed
### 3.2 Data Sensitivity
What type of data is involved?
- Personal information (PII)
- Sensitive personal information (health, financial, etc.)
- Classified or protected information
- Commercially sensitive
- Public/non-sensitive
Data Classification (PSPF):

- [ ] OFFICIAL
- [ ] OFFICIAL: Sensitive
- [ ] PROTECTED
- [ ] SECRET
- [ ] TOP SECRET
Privacy & Security Requirements:

- Privacy Impact Assessment required? [ ] Yes [ ] No [ ] Unsure
- Security assessment required? [ ] Yes [ ] No [ ] Unsure
- Data sovereignty requirements? [ ] Yes [ ] No [ ] Unsure
## Section 4: Technical Feasibility

### 4.1 Solution Approach
Preferred approach (select one or more):
- Build custom model - Develop and train your own AI model
- Use pre-trained model - Leverage existing models and fine-tune
- Commercial AI service - Procure cloud AI services (e.g., Azure AI, AWS AI)
- Open source tools - Use open source frameworks and models
- Hybrid - Combination of above
Rationale:
### 4.2 Integration Requirements
Where will the AI be deployed?
- On-premises infrastructure
- Australian cloud (onshore)
- International cloud (offshore)
- Hybrid
- Edge devices
Integration Points:

| System/Platform | Integration Type | Complexity (High/Med/Low) |
|-----------------|------------------|----------------------------|
| | | |
| | | |
### 4.3 Performance Requirements
Response Time:

- [ ] Real-time (< 1 second)
- [ ] Near real-time (1-10 seconds)
- [ ] Batch processing (minutes to hours)
- [ ] Offline (days)

Accuracy/Quality Requirements:

- Minimum acceptable accuracy: ______%
- Is explainability required? [ ] Yes [ ] No
- Are errors tolerable? [ ] Yes [ ] No [ ] Depends (explain): __________
## Section 5: Risk & Ethics Assessment

### 5.1 Ethical Considerations
Assess against the Australian Government AI Ethics Principles:
| Principle | Assessment | Mitigation Actions |
|---|---|---|
| 1. Human, societal and environmental wellbeing - AI should benefit individuals, society and the environment | [ ] Low Risk [ ] Med Risk [ ] High Risk | |
| 2. Human-centered values - Respect human rights, diversity and autonomy | [ ] Low Risk [ ] Med Risk [ ] High Risk | |
| 3. Fairness - Inclusive and accessible; avoiding unfair bias | [ ] Low Risk [ ] Med Risk [ ] High Risk | |
| 4. Privacy protection and security - Protect privacy and security | [ ] Low Risk [ ] Med Risk [ ] High Risk | |
| 5. Reliability and safety - Operate reliably and safely | [ ] Low Risk [ ] Med Risk [ ] High Risk | |
| 6. Transparency and explainability - Clear and responsible disclosure | [ ] Low Risk [ ] Med Risk [ ] High Risk | |
| 7. Contestability - Provide mechanisms for challenge and redress | [ ] Low Risk [ ] Med Risk [ ] High Risk | |
| 8. Accountability - Clear responsibility and governance | [ ] Low Risk [ ] Med Risk [ ] High Risk | |
### 5.2 Key Risks
Identify top risks:
| Risk | Impact (H/M/L) | Likelihood (H/M/L) | Mitigation Strategy |
|---|---|---|---|
Common risks to consider:

- Bias and discrimination
- Privacy breaches
- Security vulnerabilities
- Model drift or degradation
- Over-reliance on automation
- Lack of transparency
- Regulatory non-compliance
## Section 6: Business Case

### 6.1 Benefits
Quantifiable Benefits:

- Cost savings: $________ per year
- Time savings: ________ hours/FTE per year
- Productivity improvement: ________%
- Error reduction: ________%
- Other: _______________

Qualitative Benefits:

- [ ] Improved citizen experience
- [ ] Better decision-making
- [ ] Enhanced staff capability
- [ ] Increased service quality
- [ ] Regulatory compliance
- [ ] Other: _______________
### 6.2 Costs
Initial Costs (One-time):

| Item | Estimated Cost |
|------|----------------|
| Technology/software licenses | $ |
| Infrastructure | $ |
| Data preparation | $ |
| Development/implementation | $ |
| Training and change management | $ |
| Total Initial Cost | $ |

Ongoing Costs (Annual):

| Item | Estimated Cost |
|------|----------------|
| Software licenses/subscriptions | $ |
| Infrastructure/hosting | $ |
| Maintenance and support | $ |
| Model monitoring and retraining | $ |
| Staff time | $ |
| Total Annual Cost | $ |

ROI Estimate:

- Payback period: ________ months/years
- Net benefit (3 years): $________
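If helpful, the ROI figures can be sanity-checked with a simple calculation. The sketch below uses purely hypothetical cost and benefit figures; substitute your own estimates from the tables above.

```python
# Illustrative only: hypothetical figures for checking payback period and
# 3-year net benefit. Replace with your own Section 6 estimates.
initial_cost = 250_000      # total one-time costs (first table)
annual_cost = 80_000        # total ongoing costs per year (second table)
annual_benefit = 200_000    # quantifiable benefits per year (Section 6.1)

annual_net_benefit = annual_benefit - annual_cost         # $120,000 per year
payback_months = initial_cost / annual_net_benefit * 12   # ~25 months
net_benefit_3yr = annual_net_benefit * 3 - initial_cost   # $110,000

print(f"Payback period: {payback_months:.0f} months")
print(f"Net benefit over 3 years: ${net_benefit_3yr:,.0f}")
```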
### 6.3 Alternatives Considered
Have you considered non-AI solutions?
| Alternative Approach | Pros | Cons | Why Not Selected? |
|---|---|---|---|
| Business process improvement | | | |
| Rules-based automation | | | |
| Human-in-the-loop only | | | |
| Other: _____________ | | | |
## Section 7: Implementation Readiness

### 7.1 Capability Assessment
Current capability (rate 1-5):
| Capability Area | Rating | Notes |
|---|---|---|
| Data science/AI expertise | /5 | |
| Technical infrastructure | /5 | |
| Data management | /5 | |
| Change management | /5 | |
| Governance and oversight | /5 | |
Gaps and Actions:
### 7.2 Dependencies & Prerequisites
What needs to be in place first?
- Executive/leadership approval
- Budget allocation
- Data access agreements
- Privacy impact assessment
- Security assessment
- Infrastructure provisioning
- Staff training
- Vendor selection
- Other: _______________
### 7.3 Timeline Estimate
Estimated Phases:
| Phase | Duration | Key Activities |
|---|---|---|
| Planning & Design | | |
| Data Preparation | | |
| Model Development | | |
| Testing & Validation | | |
| Deployment | | |
| Total | | |
## Section 8: Recommendation & Next Steps

### 8.1 Go/No-Go Recommendation
Based on this assessment:
- Proceed - Strong candidate for AI; move to detailed planning
- Proceed with Caution - Viable but significant risks/gaps to address
- Further Investigation - Need more information or proof of concept
- Do Not Proceed - AI not appropriate; pursue alternatives
Rationale:
### 8.2 Immediate Next Steps
Priority actions (next 30 days):
### 8.3 Approvals Required
Sign-offs needed:
| Role | Name | Approval Date |
|---|---|---|
| Business Owner | | |
| IT/Technical Lead | | |
| Privacy Officer | | |
| Security Officer | | |
| Executive Sponsor | | |
## Appendix: Additional Resources
Related Templates:

- Privacy Impact Assessment Guide
- Security Assessment Checklist
- Business Case Template
- Risk Register Template

References:

- Australian Government AI Ethics Framework
- APS Digital Service Standard
- Protective Security Policy Framework
- Privacy Act 1988
Support: For assistance with this template, contact: [GovSafeAI Team]