Aka, our toolkit for the AI & Learning Design Bootcamp.

<aside>

If you only adopt three AI tools…

</aside>

1. Multi Purpose Tools

These are “general purpose” AI models, built to provide assistance with almost any task.

The strength of these tools is that they are general purpose, so you can ask them anything; the weakness is that, precisely because they are general purpose, they are not 100% expert in any one thing, including Instructional Design.

TLDR: general purpose AI tools & models have specific strengths and weaknesses. Here’s my recommended list, with caveats for Instructional Design work:

| Model | What it’s best for (Instructional Design use cases) | Caveats for ID work |
| --- | --- | --- |
| GPT-5 Thinking (OpenAI) | Document analysis and summarisation. | While GPT-5 Thinking is more reliable than other models, all AI models can miss or misinterpret evidence; treat outputs as a first pass. Prompt for citations and verify claims with research tools (see the prompt sketch after this table), and protect sensitive data. |
| Claude (Anthropic) | Popular as a writing tool, and noted for being particularly good at “mimicking” tone of voice. | Claude is great at writing, but sometimes fails to follow instructions (e.g. editing the content as well as the tone). Make it very clear what you do and don’t want it to do. Treat its output as a v1, check everything and ask it to explain if and how well it followed your instructions. |
| Perplexity (Pro Search & Deep Research) | Defining the “how”, i.e. scanning journals to find instructional strategies; gathering sources for a literature review on learning interventions; writing a quick evidence brief for a design decision. | The quality of the outputs depends on query precision and source selection. Follow citations and validate findings before adopting recommendations. Try the same search in multiple research tools and look for (and ask AI to reconcile) consistencies and inconsistencies. |
| xAI Grok (v3+) | Pulling live industry examples for case studies; finding up-to-date references for training modules; supporting scenario-based learning with real-world data. | Optimised for the live web, not education; evidence quality varies by source and recency. Triangulate with research tools and verify before integrating into learning assets. |
| Google LearnLM (available within Gemini 2.5 Pro) | Rated highest for its “pedagogical knowledge”, which means fewer errors off the bat for instructional designers. | No AI model is perfect, and no AI model or tool understands what great looks like in Instructional Design as well as you do. Never over-rely on general AI models for pedagogical and instructional design expertise; if in doubt, refer to research tools to “define the how” (see below). |
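
To make the “prompt for citations and verify claims” caveat concrete, here’s a minimal sketch of the kind of prompt I mean, written against the OpenAI Python SDK. The model name, the input file and the exact rules are placeholder assumptions rather than part of the toolkit, so adapt them to your own setup and provider.

```python
# Minimal sketch: ask a general-purpose model to summarise a source document
# and to flag every claim it cannot tie back to the text.
# Assumptions: the OpenAI Python SDK is installed, OPENAI_API_KEY is set,
# and the model name and file name are placeholders to swap for your own.
from openai import OpenAI

client = OpenAI()

source_text = open("sme_interview_notes.txt", encoding="utf-8").read()  # hypothetical file

prompt = (
    "Summarise the document below for an instructional designer.\n"
    "Rules:\n"
    "1. Quote the exact sentence that supports each key point.\n"
    "2. If a point has no supporting sentence, label it UNVERIFIED.\n"
    "3. List anything important the document does not cover.\n\n"
    "DOCUMENT:\n" + source_text
)

response = client.chat.completions.create(
    model="gpt-5",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Anything the model labels UNVERIFIED is your cue to go back to the raw document or a research tool before the summary informs a design decision.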


2. Tools for Analysis

Think of these as “sense-making” tools, built to collect, transcribe, code and crunch your inputs before you design anything.

The strength of using AI for analysis is speed and breadth: you can triangulate interviews, analyse surveys and pick out themes from vast sets of system data, fast.

The weakness of using AI for analysis is often that the data we have access to is incomplete, distributed and/or unstructured. Many AI tools are also far from perfect at data analysis; as ever, over-reliance can lead to miscalculations, over-generalisation or missed context.

My tips on balancing the risks and benefits of AI for analysis are as follows:

| Function | Tools | What it’s best for (Instructional Design use cases) | Caveats for ID work |
| --- | --- | --- | --- |
| Defining the Problem & Solution (is it a training?) | Consensus, Perplexity Deep Research, STORM, Research Rabbit, Elicit | Defining what the root cause of the problem is, and whether the solution is a training or something else. | The quality of AI’s recommendations will depend on you providing enough reliable input data about the problem the business is trying to solve. All AI models miss data, hallucinate and misinterpret data; validate everything. |
| Designing Surveys & Interviews | Consensus, Perplexity Deep Research, STORM, Research Rabbit, Elicit | Defining the optimal methods for survey and interview design, to ensure you get the data you need to make robust design decisions. | When asking “how?”, always use these research AI tools trained on robust sources, not generic AI models trained on “the internet”. Try multiple research tools and look for consistencies and inconsistencies in the data. |
| Analysing Text-Based Documents | GPT-5 Thinking | Summarising SME interviews to define a learning gap; turning scattered reports (e.g. LMS & performance data) into a clear training need; analysing LMS logs to identify performance issues. | All AI models miss data, hallucinate and misinterpret data; validate themes with raw sources. Protect sensitive data. Automated coding is a first pass only. Many IDs use NotebookLM’s audio and video summarisation to summarise content: beware of its shortcomings and check for gaps. |
| Analysing Quantitative Data | Julius AI + GPT-5 Thinking | Analysing existing performance, LMS and other data to define a learning gap; turning scattered reports into a clear training need; analysing LMS logs to identify performance issues. | All AI models miss data, hallucinate and misinterpret data; validate themes with raw sources. Protect sensitive data. See AI’s analysis as a first pass only (see the sketch after this table). |
| Determining Requirements (v1 Concepts & Skills Mapping) | Consensus, Perplexity Deep Research, STORM, Research Rabbit, Elicit | Defining and mapping prerequisite knowledge and skills for a specific type of learner and goal, by getting to know the topic. Creating knowledge & skills maps for SME review. | Keep aligned with business goals and constraints. Use the output as a “start point” for SME interviews, to accelerate (not replace) the collaboration process. |
| Running Interviews | Granola, Fathom, Otter.ai, Voxpopme | Using AI to record (or even run!) interviews for you, and then immediately analyse and theme the results. | All AI models miss data, hallucinate and misinterpret data; validate themes with raw sources. Protect sensitive data. See AI’s analysis as a first pass only. Work with research tools to check AI summaries for reliability: are the conclusions being drawn reliable and defendable? |
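
One practical way to treat AI’s quantitative analysis as a first pass is to run your own quick sanity check on the raw export and compare the numbers. Below is a minimal pandas sketch; the file name and column names (module, learner_id, completed, assessment_score) are hypothetical assumptions, so swap in whatever your LMS actually exports.

```python
# Minimal sketch: a "first pass" sanity check on a raw LMS export, to compare
# against whatever an AI analysis tool reports back.
import pandas as pd

# Hypothetical export; adjust the file and column names to your own data.
logs = pd.read_csv("lms_export.csv")

# Surface missing data first: AI tools tend to skip over it silently.
print(logs.isna().sum())

# Headline numbers per module (assumes `completed` is 0/1 or boolean).
summary = (
    logs.groupby("module")
        .agg(
            learners=("learner_id", "nunique"),
            completion_rate=("completed", "mean"),
            avg_score=("assessment_score", "mean"),
        )
        .sort_values("completion_rate")
)
print(summary)
```

If the AI tool’s narrative (“module X has the lowest completion rate”) doesn’t match these headline numbers, trust the raw data and re-prompt.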


3. Tools for Design

Think of these as “planning” tools that turn analysis insights into objectives, storyboards, flows and stakeholder-ready drafts.

The strength of working with AI on design work is rapid ideation and visualisation; the weakness is that outputs aren’t inherently pedagogically sound — it’s up to you to define the “how” and assess the quality of AI’s pedagogical decision-making.

That said, design tools shine when you give them direction, structure and examples. Here’s the TLDR:

| Function | Tools | What it’s best for (Instructional Design use cases) | Caveats for ID work |
| --- | --- | --- | --- |
| Brainstorming & Creative Ideation | Gemini 2.5 Pro & Claude Opus 4.1 (esp. the “thinking-16k” variant) | Multiple studies confirm that these two models improve the range and quality of creative ideas (compared with not using AI). Use AI to generate innovative ideas that optimise for learner engagement and outcomes. | Balance creativity, pedagogy and execution risk: always critically assess the likely impact of your creative approach on learner engagement and achievement, and ask whether it is feasible. |
| Instructional Strategy Definition | Consensus, Perplexity Deep Research, STORM, Research Rabbit, Elicit | Provide info on the learning goal, target learner and any practical constraints, then explore the optimal instructional strategy and approach, e.g. length, mode of delivery (online, in person, blended) and instructional approach (problem-based, experiential, etc.). | The quality of AI’s recommendations will depend on you providing enough reliable input data about the problem the business is trying to solve, the learner, constraints, etc. All AI models miss data, hallucinate and misinterpret data; validate everything. |
| Content Analysis & Summarisation (Text) | GPT-5 Thinking | Summarising text documents, e.g. those provided by SMEs, to understand and define objectives, content, etc. | All AI models miss data, hallucinate and misinterpret data; validate themes with raw sources. Protect sensitive data. Automated coding is a first pass only. |
| Content Analysis & Summarisation (Numerical Data) | Julius AI + GPT-5 Thinking | Summarising numerical documents, e.g. those provided by SMEs, to understand and define objectives, content, etc. | All AI models miss data, hallucinate and misinterpret data; validate themes with raw sources. Protect sensitive data. See AI’s analysis as a first pass only. |
| Objectives Writing | GPT-5 Thinking, Claude (plus Consensus, Perplexity Deep Research, STORM, Research Rabbit, Elicit) | Drafting & sequencing measurable objectives from business goals, mapped to learner profiles. | Generic AI models default to what is common, not what is optimal. Always research what great objectives writing and sequencing looks like, then “teach” AI HOW to write objectives in your prompt (see the sketch after this table). |
| Research & Content Creation | Consensus, Perplexity Deep Research, STORM, Research Rabbit, Elicit | Finding case studies and best practices to enrich and support training. | Always use these research AI tools trained on robust sources, not generic AI models trained on “the internet”. Try multiple research tools and look for consistencies and inconsistencies in the data. |
| Presenting | Gamma, Canva & NotebookLM (video summary) | Turning a high-level design plan into a deck or video explainer for stakeholder review. | State the why as well as the what. Keep decks short and simple but include the research you have gathered from research tools; this helps to accelerate the decision-making process. AI will make errors: proofread everything before you share it with stakeholders. |
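
Here’s what “teaching” AI HOW to write objectives can look like in practice: a minimal sketch that embeds your own rubric in the prompt rather than letting the model default to what’s common. The rubric, business goal, learner profile and model name are all illustrative assumptions; replace them with the guidance you gather from research tools and your own project details.

```python
# Minimal sketch: embed an objectives-writing rubric in the prompt so the model
# follows your standard, not the most common pattern from its training data.
from openai import OpenAI

client = OpenAI()

# Illustrative rubric only -- replace with guidance gathered from research tools.
rubric = (
    "Write learning objectives that:\n"
    "- start with one observable, measurable verb (no 'understand' or 'know');\n"
    "- state the condition and the standard of performance;\n"
    "- map to the business goal and learner profile below;\n"
    "- are sequenced from prerequisite to terminal objective.\n"
)

business_goal = "Reduce onboarding time for new support agents"  # placeholder
learner_profile = "New hires with no prior CRM experience"       # placeholder

response = client.chat.completions.create(
    model="gpt-5",  # placeholder model name
    messages=[{
        "role": "user",
        "content": (
            rubric
            + f"\nBusiness goal: {business_goal}\nLearner profile: {learner_profile}\n"
            + "Draft and sequence the objectives, then explain how each one meets every rule above."
        ),
    }],
)
print(response.choices[0].message.content)
```

Asking the model to explain how each objective meets the rubric gives you something concrete to check, rather than taking the draft at face value.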


4. Tools for Development & Implementation