# Use AI Tools Responsibly
You have access to AI tools like ChatGPT, Copilot, or GovSafeAI. This journey helps you use them effectively while keeping yourself and your organisation safe.
- **Never trust blindly** - AI makes confident-sounding mistakes
- **Never input sensitive data** - unless you're certain it's allowed
- **Always verify important outputs** - you're accountable, not the AI
- **Know your policies** - different organisations have different rules
## Before You Start
### Check What's Allowed
Your organisation likely has rules about AI use. Find out:
- What AI tools are approved for use?
- What can and can't be input?
- What use cases are permitted?
- Who should you ask if you're unsure?
**If in doubt, ask.** Using unapproved tools or inputting sensitive data can have serious consequences.
### Understand the Basics
What AI can do well:

- Draft content you'll review
- Explain concepts
- Summarise long documents
- Generate ideas and options
- Help with coding
- Translate languages
What AI does poorly:

- Facts and citations (it often invents them)
- Current events (its knowledge may be outdated)
- Complex multi-step reasoning
- Anything requiring real domain expertise
- Understanding your specific context
- Staying consistent from one response to the next
## Using AI Safely
### What NOT to Input
Never input these:
- Personal information about citizens or staff
- Classified or sensitive documents
- Internal strategies or plans not for sharing
- Passwords or credentials
- Proprietary or confidential information
- Anything you wouldn't want public
### Before Inputting, Ask
- Is this data allowed per my organisation's policy?
- Would I be comfortable if this appeared publicly?
- Does this contain personal information?
- Is there a safer way to get the same result? (A simple automated pre-check, sketched below, can help you pause.)
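One way to make that pause concrete is a quick automated scan before anything leaves your machine. Below is a minimal, illustrative Python sketch; the `check_before_input` helper and its regex patterns are hypothetical examples, not an approved filter, and simple pattern matching will always miss things. Treat a clean result as "keep thinking", never as permission.

```python
import re

# Hypothetical, illustrative patterns only. Personal data takes many forms
# that simple regexes will miss; this is a prompt to pause, not a safeguard.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone number": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "national insurance number": re.compile(
        r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"
    ),
}

def check_before_input(text: str) -> list[str]:
    """Return a warning for each pattern that looks sensitive."""
    return [
        f"Possible {label} found: check policy before sending."
        for label, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(text)
    ]

draft = "Please summarise this complaint from jane.doe@example.com"
for warning in check_before_input(draft):
    print(warning)
```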
## Using AI for Different Tasks
### Drafting and Writing

Good practice:
- Start with your key points, ask AI to expand
- Have AI suggest structure, you fill in detail
- Use as first draft, heavily edit
- Check all facts independently
Caution:
- AI may generate plausible but wrong content
- Tone may not match your needs
- May include biases or inappropriate content
- Your name goes on it, not the AI's
### Research and Learning

Good practice:
- Use to understand concepts
- Ask for explanations of complex topics
- Summarise documents you've provided
- Generate search terms and questions
Caution:
- Never trust citations—verify them
- Knowledge may be outdated
- May miss nuances or exceptions
- Can confidently present misinformation
### Coding

Good practice:
- Explain and debug existing code
- Generate boilerplate
- Suggest approaches
- Write tests and documentation
Caution:
- Generated code may have bugs
- Security vulnerabilities possible
- May not follow your standards
- Always review and test (see the sketch below)
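"Always review and test" can start as something very lightweight: write a few assertions before accepting generated code. The sketch below imagines a hypothetical `working_days_between` function produced by an AI assistant; a handful of known-answer checks is often enough to expose the edge-case bugs generated code tends to contain.

```python
from datetime import date, timedelta

# Imagine this function was generated by an AI assistant and pasted in.
def working_days_between(start: date, end: date) -> int:
    """Count weekdays from start (inclusive) to end (exclusive)."""
    days = 0
    current = start
    while current < end:
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            days += 1
        current += timedelta(days=1)
    return days

# Known answers and edge cases, checked before trusting the code.
assert working_days_between(date(2024, 1, 1), date(2024, 1, 8)) == 5  # one full week
assert working_days_between(date(2024, 1, 6), date(2024, 1, 8)) == 0  # weekend only
assert working_days_between(date(2024, 1, 3), date(2024, 1, 3)) == 0  # empty range
print("Checks passed; still review for style and security.")
```

Passing a few assertions is the floor, not the ceiling: a human review for security, standards, and fit still follows.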
### Brainstorming and Ideas

Good practice:
- Generate lists of options
- Challenge your thinking
- Explore different perspectives
- Overcome blank page syndrome
Caution:
- May suggest impractical ideas
- Can reinforce biases
- Quality varies widely
- Still need human judgment
## Getting Good Results
### Prompting Tips
| Technique | Example |
|---|---|
| Be specific | "Write a 200-word summary for executives" vs "Summarise this" |
| Provide context | "I'm a policy officer in health. Help me..." |
| Specify format | "Respond in bullet points" or "Give me a table" |
| Ask for alternatives | "Give me three different approaches" |
| Iterate | "Make it shorter" or "Add more detail on X" |
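Here is how several of those techniques combine in a single request. This is a minimal sketch assuming a Python environment with the `openai` client library and an OpenAI-compatible endpoint; your approved tool, model name, and API will differ, so treat every identifier below as a placeholder.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key in the environment; your setup will differ

# One prompt applying several techniques from the table above.
prompt = (
    "I'm a policy officer in health. "                             # provide context
    "Write a 200-word summary for executives of the text below. "  # be specific
    "Respond in bullet points, "                                   # specify format
    "then give me three different framings of the key risk."       # ask for alternatives
    "\n\n<non-sensitive text to summarise goes here>"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Iteration then happens in the conversation itself: send "Make it shorter" or "Add more detail on X" as follow-up messages rather than starting over.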
### Quality Verification
Before using any AI output:
| Check | Why |
|---|---|
| Facts | AI invents plausible-sounding facts |
| Sources | Citations are often fabricated |
| Logic | Reasoning may be flawed |
| Tone | May not match your needs |
| Completeness | May miss important considerations |
| Bias | May reflect or amplify biases |
### When to Double-Check
Always verify when:
- The output will be seen by others
- Decisions will be based on it
- It contains statistics or claims
- It cites sources or references
- Accuracy really matters
- It doesn't feel quite right
## When Things Go Wrong
### Recognising Problems
Signs of AI errors:
- Very specific numbers or dates (often invented)
- Confident statements on contentious topics
- Citations that look almost right (a quick link check, sketched below, catches many fakes)
- Content that seems too good to be true
- Inconsistent information within response
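For citations that look almost right, even a crude automated check helps: confirm that cited links actually resolve before you spend time reading. A minimal sketch using the widely available `requests` library follows; a link that resolves is necessary but nowhere near sufficient, because a real page can still fail to say what the AI claims it says.

```python
import requests

def link_resolves(url: str, timeout: float = 5.0) -> bool:
    """True if the URL answers with a non-error status.
    Note: some servers reject HEAD requests, so treat failures
    as a flag for manual checking, not proof of fabrication."""
    try:
        response = requests.head(url, allow_redirects=True, timeout=timeout)
        return response.status_code < 400
    except requests.RequestException:
        return False

cited = [
    "https://doi.org/10.0000/made-up-doi",  # fabricated citations often fail to resolve
    "https://www.example.com/",
]
for url in cited:
    print(url, "->", "resolves" if link_resolves(url) else "check manually")
```

Even when a link resolves, read the source: AI routinely attaches real references to claims those references never make.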
### What to Do
If AI produces something wrong:
- Don't use the incorrect output
- Fact-check before proceeding
- Adjust your prompt and try again
- Consider whether AI is the right tool for this task
If AI produces something harmful:
- Don't share or use the output
- Report per your organisation's process
- Document what happened
- Consider whether policy changes are needed
If you accidentally input sensitive data:
- Stop immediately
- Report to your IT/security team
- Don't try to "undo" it—you can't
- Document what was input
## Building Good Habits
### Daily Practices
- Pause before inputting—is this data appropriate?
- Verify before sharing—have you checked the output?
- Attribute appropriately—be transparent about AI use
- Learn continuously—AI tools change rapidly
### Questions to Ask Yourself
Before using AI output:
- Have I verified the facts?
- Would I defend this work as my own?
- Does this meet quality standards?
- Am I being transparent about AI use?
## When NOT to Use AI
| Situation | Why |
|---|---|
| Legal advice | Need qualified professionals |
| Medical decisions | Potentially dangerous errors |
| Personnel matters | Sensitivity, bias risks |
| Security decisions | Too important to risk |
| Where prohibited | Follow your policies |
## Learning More
Build your skills:
- Practice prompting techniques
- Learn from what works and what doesn't
- Share tips with colleagues
- Stay updated on new capabilities
- Attend training when available
Stay informed:
- AI capabilities change rapidly
- Policies update over time
- New risks emerge
- Best practices evolve
## Quick Reference Card
### ✅ Do
- Use approved tools only
- Verify important outputs
- Think before inputting data
- Ask if unsure
- Be transparent about AI use
- Report problems
### ❌ Don't
- Input sensitive data
- Trust outputs blindly
- Skip verification steps
- Use for prohibited purposes
- Pretend AI work is entirely yours
- Ignore your policies
### 🆘 Help
- Policy questions → Your manager / IT
- Technical issues → IT support
- Security concerns → Security team
- Something went wrong → Report immediately