Getting the Most from AI Features

A practical guide to writing better AI prompts and working efficiently with Mindsmith’s tools.

Written by Zachary Allen
Updated this week

How to Get the Best Lessons Out of Mindsmith

When you open Mindsmith and start a new lesson, you're not filling out a form or pressing a "generate" button. You're starting a conversation with the Mindsmith Agent — your AI co-designer. And like any collaborator, it does its best work when you tell it what you actually need. (Want to know how the agent works under the hood? Read The Mindsmith Agent.)

Here's exactly what helps — and what doesn't.

1. Your First Message Is the Blueprint

The first thing you type is the single most important moment in the entire process. It's the first thing the agent reads, and it shapes every decision it makes — structure, depth, tone, interactivity, everything.

What works:

"Create a lesson teaching new warehouse employees how to safely operate a forklift. Cover pre-operation inspection, basic controls, load handling, and common hazards. Include a scenario where the learner makes decisions in a realistic warehouse situation. End with an assessment."

The agent now knows the audience, the scope, the structure, the content areas, and the special elements you want. It can move fast and get it right.

What slows things down:

"Make a forklift training."

The agent will ask you questions before it starts — Who's the audience? How deep should it go? Do you want assessments? That back-and-forth is time you could've saved with two more sentences upfront.

The rule: Write your first message like you're briefing a smart colleague who's never seen your project. Specificity is free and it pays dividends.

2. Answer the Agent's Questions — They're Not Small Talk

When your request is vague or the agent spots a gap, it will ask clarifying questions before it starts building. Things like:

  • Who's the target learner?

  • What should the assessment focus on?

  • Do you want it to follow your document's structure or synthesize freely?

These aren't filler. Each answer changes the lesson that gets built. If the agent asks whether your audience is "experienced nurses" or "first-year nursing students," that decision affects vocabulary, assumed knowledge, example complexity, and depth of explanation on every single page.

You can skip the questions — the agent will use its best judgment. But your answers are always better than its guesses because you know your learners.

3. Upload Your Materials — But Say What Matters

You can attach documents, PDFs, slide decks, and videos directly in the chat. The agent will read them, search through them, and ground the lesson in your actual content. But it works with what's there, and it needs your guidance on what matters most.

What helps:

  • Clean, well-structured documents with headings and logical sections

  • Videos with clear audio (transcript quality depends on it)

  • Context about what to focus on — "Use chapters 3 and 5, skip the rest"

  • Guidance on how closely to follow the source — "Stick to this document's structure" vs. "Use this as a reference but teach it your way"

What trips things up:

  • Uploading materials unrelated to what you asked for — the agent will try to use them and the result gets muddled

  • A 200-page manual with no guidance on what's in scope

Pro tip: If you have a 45-minute training video and want a focused 8-page lesson, say: "Focus on the onboarding process from the first 15 minutes and the compliance section near the end." The agent can find those segments, but your guidance saves a round trip.

4. Say Who's Sitting in the Chair

The target learner is the single most underused piece of context in AI lesson generation. Most people skip it. It changes everything.

"Senior managers familiar with company policy" produces a radically different lesson than "new hires in their first week." The agent adjusts:

  • Vocabulary — jargon vs. plain language

  • Assumed knowledge — what gets explained vs. what gets skipped

  • Example complexity — real-world edge cases vs. foundational scenarios

  • Depth — conceptual overview vs. detailed procedures

If you don't specify, the agent aims for the middle. The middle is nobody's ideal.

5. The Storyboard Is Your Highest-Leverage Moment

When you ask the agent to create a lesson, it doesn't jump straight to finished pages. It builds a storyboard first: title, learning objectives, and a page-by-page outline. Then it shows the storyboard to you and asks what you think.

This is the highest-leverage moment in the entire process.

Changing a page in the storyboard takes seconds. Reworking a fully generated page takes longer. So when the storyboard appears:

  • Check the objectives — are they what you intended?

  • Look at the page flow — does the structure make sense for your learners?

  • See a section that's missing? Say so.

  • See one that's unnecessary? Say that too.

  • Want a scenario moved earlier? A knowledge check added mid-lesson? Just ask.

The designers who get the best lessons are the ones who actually engage here. "Looks good" is fine if it actually looks good. But "swap sections 2 and 3, and add a matching activity after the vocabulary page" gets you exactly what you want.

6. Be Specific About What "Interactive" Means

The agent has a lot of tools in its kit — flashcards, tabs, accordions, timelines, process diagrams, branching scenarios, matching activities, sorting exercises, conversations, AI video, and more.

But "make it interactive" doesn't say which ones to use. Instead, try:

  • "Add a branching scenario where the learner practices a difficult conversation with a customer"

  • "Use flashcards for the vocabulary section"

  • "Include a sorting activity where learners categorize the risks by severity"

  • "I want a process tile showing the five steps of the intake procedure"

The more specific you are about the type of interaction, the better the agent can match the pedagogy to the content. Flashcards are great for recall. Scenarios are great for application. Tabs are great for comparing options. Each one teaches differently.

If you're not sure what to ask for, just describe the learning moment you want: "I need the learner to practice making a decision under pressure." The agent will pick the right format.

7. After Generation, You're Just Getting Started

The generated lesson is a first draft, not a final product. And this is where most people underuse the agent.

You're still in the same conversation. The agent remembers everything — your objectives, your source materials, your audience, the storyboard you agreed on. Just say what to change:

  • "Make page 4 shorter and more conversational"

  • "Add a branching scenario after the safety procedures section"

  • "Replace the quiz on page 7 with a matching activity"

  • "The tone is too formal — make it friendlier throughout"

  • "Add an image to the intro page that shows the facility layout"

The agent can make targeted edits without regenerating the whole lesson. It can add pages, remove pages, swap tile types, adjust content depth, regenerate images, and restructure sections. One message at a time.

The designers who produce the best work treat generation as the starting point and spend 5-10 minutes refining afterward. That refinement pass is where a good lesson becomes a great one.

8. Provide Objectives and Everything Aligns

If you provide learning objectives — even rough ones — the agent uses them as the backbone of the entire lesson. Here's what happens behind the scenes:

  • Every content page gets tagged to the objectives it teaches

  • Every assessment question gets mapped to the objective it tests

  • Question types get matched to cognitive levels — "define" gets multiple choice, "apply" gets a scenario, "evaluate" gets a case study

  • Every objective is taught before it's tested

If you leave objectives blank, the agent will infer them from your prompt, and it'll do a reasonable job. But you know your learning goals better than any AI can guess them. Even a rough list like "understand X, be able to do Y, evaluate Z" gives the agent dramatically more to work with.

9. Tone and Style Aren't Cosmetic

"Professional but approachable" vs. "academic and formal" vs. "casual and encouraging" — these shape every bullet point, every heading, every piece of feedback on a quiz question.

If your organization has a specific voice, mention it in the conversation or set up a Writing Style in your org settings. The agent will match sentence structure, vocabulary level, and formatting preferences across every page.

If you don't specify, the default is professional. Which is fine — but it might not be you.

The TL;DR

Write a clear first message with scope, audience, and structure. Upload clean source materials and say what matters in them. Answer the agent's questions when it asks — they change the output. Engage with the storyboard before generation. Be specific about interactivity. And keep the conversation going after the first draft — that's where the real magic happens.

The best lessons on Mindsmith come from designers who treat the agent like a junior instructional designer on their team: capable, fast, and eager to do good work — but in need of a clear brief.


Ready to start? Create your first lesson or learn more about how the Mindsmith Agent works.
