In this episode, part 2 of our informal trilogy on Customer Education instructional design, we discuss a few failed hypotheses around content development. These are things we’ve been asked to do, and we’ve all certainly tried in our careers, but they just don’t lead to effective learning. Tune in to hear more!

New Customer Education leaders, especially "accidental" customer educators who get thrown into the role, often don't have strong backgrounds in content development. That leads to content that can please stakeholders but will fail customers.

Our goal as content creators should be to:

  • Make content memorable, so learners won’t forget it after they take our courses or read our articles
  • Drive actual behavior change, so learners will do something differently after they engage with our content

How do we often get this wrong? Read on.

Content development myth #1: The Sage on the Stage

The first myth we'll tackle is the "sage on the stage" — the idea that if you put a really intelligent subject matter expert in front of learners and let them do their thing, then learning will occur.


A lot of the time, when you walk into an org with relatively immature Customer Education, you can tell this is happening: product experts lead the trainings, those trainings are densely packed with content, and the expert runs through all of it without much interactivity. They're basically delivering a lecture.

What's wrong with that? Well, first of all, customer education isn't like school. In school, the professor can lecture however they want, and at the end of the day the student is responsible for studying and doing what they need to do to get the grade. And their GPA has far-reaching consequences for them.

That doesn’t work in Customer Education a lot of the time. Your customers don’t always have an intrinsic motivation to learn how to use your product. They’re not all naturally curious about it, so you can’t design your content as if they are.

And they don't have an extrinsic motivation like grades. Unless your company has written provisions into the contract (say, that the customer can't launch until all their users get certified), there's no real motivation for them to struggle through bad training.

Instead, the burden is on you to move from the "sage on the stage" to the "guide on the side."

How do you do that?

  • Reduce the curse of knowledge – experts don't remember what it was like to use the product on day one, so you have to gut-check their assumptions, their jargon, and the things that seem basic to them
  • Get constant feedback from customers about content and pace
  • Work interactivity into the training – get learners to reflect, speak up, ask questions, and practice their skills
  • Pause for questions – ask "What questions do you have?" instead of "Any questions?"
  • Measure learner satisfaction AND whether they're doing what you trained them to do afterward

Content development myth #2: The content-first approach

So we’ve come up with a better model for how to structure interaction. But we still have to address another bad hypothesis — the idea that if a training includes all the important content that a customer “needs,” then it is a more valuable training.


How do you know if this is happening? Your trainings run through many topics in one sitting, those topics don't have much to do with each other, and they're aimed at different audiences at the same time. You're also probably running out of time frequently, so you don't feel like you have room for all that interactivity we talked about before.

Instead of taking this content-first approach, back up and ask yourself what you want your learner to be able to do after this training. And ask: is this the right time for them to learn it?

It’s not helpful for them to learn EVERYTHING about your product on day one. They simply won’t remember this stuff later. Back to the curse of knowledge — you know that Advanced Feature X is important, but if your learner isn’t going to go set it up during this training or immediately after, maybe don’t train on it in your 101 class.

Ironically, the more you include, the less likely people are to remember any of it. At a certain point, you run into the limits of cognitive load: we can only hold so many things in working memory at once. Including too much content leads to cognitive overload, and learners just don't retain what they're supposed to.

So instead, start by asking which behavior changes need to happen AT THIS POINT. Ask which skills you'd actually test the learner on, and then actually test or certify them. Even if you don't, use that as your standard for what to include: if it's not important enough to test, it's not important enough to teach.

I run into orgs all the time that have these bloated 101 trainings but don't know what their intermediate or advanced courses should be. Chances are the intermediate content is already lurking in that bloated 101 training. It should be delivered later in onboarding, or even post-onboarding, when the learner is ready to act on it.

How to avoid these traps

When you’re beginning to create content for your Customer Education program, start by asking questions that will help you evaluate whether your style is a “Sage on the Stage” or a “Guide on the Side,” as well as whether you’re taking a content-first or learner-first approach:

  • How is training done today? Who delivers it?
  • Is the training engaging or interactive? How would you feel if you were the customer?
  • How many topics are covered, and does the content align with what the learner actually needs to do?
  • Are you measuring the learners’ reactions to your course?
  • Are you measuring behavioral outcomes from the course?

Answering these questions will set you on the right course to produce content that is memorable and drives meaningful behavior change.
