Critical thinking with AI tools
This microlearning module focuses on how to think critically when using AI tools like Copilot, and why verification is a necessary part of working with AI.
AI can be a powerful helper, but it is not a source of truth. You’ll learn how AI can produce confident-sounding output that may still be wrong, outdated, or biased, and how simple verification steps help you avoid mistakes. The module shows how to assess risk, spot red flags, and use AI itself to support better, more reliable decisions in your daily work.
Learning Objectives
- Understand why AI output must be verified and why AI should not be treated as a source of truth.
- Recognize common risks in AI-generated output, such as hallucinations, bias, outdated information, and calculation errors.
- Apply critical-thinking techniques to assess AI output based on context, source quality, and risk level.
- Adjust the level of verification to match the task: low-, medium-, or high-risk.
- Use AI tools to support verification by asking the right follow-up questions and cross-checking information.