Have you ever watched a learning video and felt completely overwhelmed, even though the topic itself wasn't that complicated? That feeling of mental exhaustion is cognitive overload, and it's often the result of poor instructional design.
Host Matt Pierce introduces Cognitive Load Theory (CLT), a framework that explains how our brains process information and, more importantly, how we can design learning experiences that work with our cognitive limitations rather than against them.
Matt breaks down the three types of cognitive load: intrinsic (the inherent difficulty of the material), extraneous (unnecessary mental effort caused by poor design), and germane (the beneficial mental effort that leads to real learning).
Throughout the episode, Matt shares practical, actionable strategies that video creators can implement immediately to create videos that teach rather than overwhelm.
Learning points from the episode include:
- The three types of cognitive load: intrinsic, extraneous, and germane
- Cut the clutter: if an element doesn't support meaning, it steals attention
- Chunk the challenge: teach the parts first and group steps into small, paced segments
- Guide the gaze: be explicit about where to look and say the action as it happens
Important links and mentions:
- Dr. Richard Mayer's research on multimedia learning
- Jonathan Halls and his book Creating Training Videos
Matt Pierce: What if your best-looking training videos are quietly making learning harder? Not because your content is wrong, but because the brain is overloaded before the lesson can land? Let's talk about how to fix that. Good morning, good evening, good afternoon, wherever you are and wherever you're watching from. Welcome to The Visual Lounge. Let's dive in.

Matt Pierce: Imagine a postcard-sized desk. Tiny, right? That's your learner's working memory. Put a few items on it and it's great. Stack on a few more and, well, things start sliding off. In learning, that desk has two incoming channels: the auditory channel, the words I say, and the visual channel, what the viewer sees. If we pour in too much, busy screens, dense narration, duplicate text, things start falling off the desk. That's cognitive overload.
Matt Pierce: Our job is to design so that only the right items are on the desk at the right time. Today we'll use some proven multimedia principles to do three things: eliminate the noise we add by accident, respect the brain's limits with smart pacing, and guide the eye so attention lands where meaning lives.

Matt Pierce: Extraneous load is mental effort that doesn't help learning. It's clutter on the desk. If an element doesn't point to meaning, it's probably noise. So start cutting decorative backgrounds, pop-ups, extra toolbars, and sometimes the clever flourishes that really don't serve the current step.
Matt Pierce: A common overload pattern is reading and watching at the same time. If you put full-sentence text on screen while narrating, or float it over a live interface, you force the visual channel to read and track the demo simultaneously while the narration competes for attention. Here's what you can do instead. First, let your voice carry the why and let the screen, when you show it, carry the what. Keep on-screen text short and purposeful: think labels, keywords, or step numbers. Put full sentences in captions or transcripts, not floating over the demo.

Matt Pierce: While scripting, or as you go through your review process, here are three quick questions you can ask. Is this element necessary to understand this step? Is any text duplicating what I'm saying? Can a viewer tell instantly what matters on the screen? If the answer to that last question is no, we'll solve it shortly by guiding the eye. Remember, everything has to fight for a place in your video: audio as well as visuals as well as text.
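If it helps to make that review pass concrete, here is a minimal sketch in Python of how those three questions could be run over a storyboard frame. It is purely illustrative: the ScreenElement fields, the word-overlap threshold, and the sample frame are assumptions made for this sketch, not part of the episode or any TechSmith tool.

```python
# Illustrative sketch only: a made-up way to apply the episode's three review
# questions to the elements in one storyboard frame.
from dataclasses import dataclass

@dataclass
class ScreenElement:
    name: str             # e.g. "decorative background", "step label"
    supports_step: bool   # Q1: is it necessary to understand this step?
    on_screen_text: str   # any text the element displays ("" if none)

def review(elements: list[ScreenElement], narration: str) -> list[str]:
    """Flag likely extraneous load in one frame."""
    warnings = []
    narration_words = set(narration.lower().split())
    for el in elements:
        if not el.supports_step:
            warnings.append(f"Cut or simplify '{el.name}': it doesn't support this step.")
        text_words = set(el.on_screen_text.lower().split())
        # Q2: does the on-screen text mostly repeat what the narration already says?
        if text_words and len(text_words & narration_words) / len(text_words) > 0.7:
            warnings.append(f"'{el.name}' duplicates the narration; shorten it to a label or keyword.")
    return warnings

# Q3 (can a viewer tell instantly what matters?) stays a human judgment call.
frame = [
    ScreenElement("decorative background", supports_step=False, on_screen_text=""),
    ScreenElement("full-sentence caption", supports_step=True,
                  on_screen_text="Click the save button in the top right to lock in your changes"),
]
print(review(frame, "Click the save button in the top right to lock in your changes."))
```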
Matt Pierce: Now, intrinsic load is the complexity of the content itself. I can't make a 15-step workflow inherently simple, but I can make it learnable. How? Break the lesson into small, meaningful clusters, maybe two to five actions per cluster, and finish each cluster with a brief breath before you move on. If the platform supports it, you can let the learner click continue, which I know gets a bad rap, but sometimes it can actually be really good. If not, insert a short verbal reset: "Phase one, record," then do the steps; "Phase two, edit," the next small cluster; "Phase three, export." Those two-to-four-second resets let working memory file what it just learned before the next items hit the desk. There are lots of ways to do this. Another is simply to invite the learner to pause the video and take the breath they need.
Matt Pierce: When terms or parts are new, teach the pieces before the process. Thirty to sixty seconds of "meet the parts" pays off. For example: you'll use the timeline to cut, the canvas to see changes, and export to produce an MP4. That's all we need for today. Then, during the workflow, learners aren't decoding labels and following logic at the same time. Segmentation and pre-training don't dumb anything down; they sequence complexity so the tiny desk never overloads.
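To show what that segmenting might look like in practice, here is a rough Python sketch that groups a flat list of steps into small clusters and drops a reset line after each one. The step names and cluster size are placeholders chosen for the sketch, not a prescribed workflow.

```python
# Illustrative sketch only: chunk a long workflow into small clusters
# and add a short verbal reset after each cluster.
def segment(steps: list[str], cluster_size: int = 3) -> list[str]:
    """Group steps into clusters of a few actions, with a reset line after each."""
    script = []
    for phase, start in enumerate(range(0, len(steps), cluster_size), start=1):
        script.extend(steps[start:start + cluster_size])
        script.append(f"[reset] That completes phase {phase}. Here's what you should have now.")
    return script

# Placeholder steps for a record / edit / export workflow.
workflow = [
    "Start a capture", "Perform the task", "Stop the recording",
    "Find the pause", "Make two cuts", "Close the gap",
    "Choose MP4", "Confirm the export",
]
for line in segment(workflow):
    print(line)
```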
Matt Pierce: Now, I realize I've covered a lot of vocabulary, and I'm about to introduce a new term. You might be saying, oh my gosh, this is a lot of learning science, and it is, but it's good to know. Germane processing is the good effort, the mental work that builds a durable mental model, or schema; both are good terms and well worth knowing. We maximize that good effort by directing attention and aligning our timing. Tell learners exactly where to look and why it matters, even in a talk-to-camera format. For instance, if I say, "In the top right, the save button: this locks in your changes," that makes immediate sense to you.
Matt Pierce: If you cut to the screen occasionally, use a tight crop and one consistent highlight, an arrow, an outline, or a halo, used sparingly, to add to that clarity. As you show a step, say the action as the action happens. You might hover a beat, then say the action and perform it as it's said. Words and visuals should arrive together so your viewer doesn't have to hold one thing in working memory while waiting for the other. There's a slight variation: start moving the cursor, give it a little lead so the eye starts following, and then say the thing you want to say. That's also appropriate; you just don't want a big gap where the mouse goes up there and waits. You want to say the thing as it moves.
Matt Pierce: So let's go through another example, a pattern you can follow along with. It's pretty easy and it gets repetitive quickly, but I think it'll help illustrate the idea. First, set the target: in two minutes, you'll know how to record, trim, and export. Perfect, right? Next, pre-train for 30 to 60 seconds, depending on what you're trying to show: your timeline equals cuts, your canvas equals the view, and your export equals your MP4. You're setting learners up with clarity about what each of the pieces is.

Matt Pierce: Then segment by your sub-goals. You might have several phases: phase one, record, a small cluster of steps plus a one-line recap; phase two, edit, another small cluster and another recap; then phase three, export, again a small cluster and a recap. You never have to tell the learner which phase they're on; you just follow the pattern. Inside each cluster, signal and sync by naming the target clearly and saying the action as it happens. At the very end of the video, close with a retrieval cue: a one-sentence summary that restates the goal and the crucial step.
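One way to hold onto that pattern while scripting is to jot the outline down in a fixed shape before writing narration. Here is an illustrative Python sketch, with the record-trim-export lesson from the episode filled in as sample data; the class names, fields, and recap wording are made up for the sketch.

```python
# Illustrative sketch only: capture the target / pre-train / segment /
# retrieval-cue pattern as a small outline structure before scripting.
from dataclasses import dataclass

@dataclass
class Phase:
    name: str          # sub-goal, e.g. "Record"
    steps: list[str]   # a small cluster, roughly two to five actions
    recap: str         # the one-line recap that closes the cluster

@dataclass
class Lesson:
    target: str        # what the learner can do by the end
    parts: list[str]   # the 30-60 second "meet the parts" pre-training
    phases: list[Phase]
    retrieval_cue: str # one-sentence closing summary

def outline(lesson: Lesson) -> str:
    lines = [f"Target: {lesson.target}", "Pre-train: " + "; ".join(lesson.parts)]
    for phase in lesson.phases:
        lines.append(f"{phase.name}: " + " -> ".join(phase.steps))
        lines.append(f"  Recap: {phase.recap}")
    lines.append(f"Retrieval cue: {lesson.retrieval_cue}")
    return "\n".join(lines)

lesson = Lesson(
    target="In two minutes, you'll know how to record, trim, and export.",
    parts=["timeline = cuts", "canvas = view", "export = MP4"],
    phases=[
        Phase("Record", ["Start a capture", "Stop when done"], "Your clip is in the project."),
        Phase("Edit", ["Find the pause", "Make two cuts", "Close the gap"], "The pause is gone."),
        Phase("Export", ["Choose MP4", "Confirm"], "You have a shareable file."),
    ],
    retrieval_cue="Record, trim, export: capture the screen, cut the pause, produce your MP4.",
)
print(outline(lesson))
```

Reading the outline back before recording is also a quick way to notice a cluster that has quietly grown past four or five actions.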
Matt Pierce: That's a lot, I know, but here's another, maybe more concrete, scenario. Let's say we're making a video, again about exporting a video. "Today's goal is simple: record, trim, export. Before we start, the parts you'll use are the timeline for cuts, the canvas to see changes, and export to produce your MP4. Now let's get started. First, recording: start a capture, stop when you're done, and your clip appears in the project. Next, editing: find the pause, make two cuts, remove the gap, and close it up. The last thing we need to do is export: choose MP4 and confirm."

Matt Pierce: The key idea is that you're explaining and showing things together. You don't need paragraphs on screen. Focus on one thing at a time and help the learner move through the process seamlessly. If you have a continuous monologue going, insert a micro-reset, something like, "That completes phase one. Here's what you should have now," and then continue. Maybe you're reading big paragraphs on screen while narrating. Gosh, that's a lot. Replace them with labels or keywords, and put the full sentences in captions or in the transcript.
Matt Pierce: Watch for vague references like "it's up there somewhere." Instead, name and locate: "top right, the save button." If you cut to a screen, crop tight and use one subtle highlight. I'd also encourage you to increase the size of your mouse cursor so it's easy to see and always findable on your screen.

Matt Pierce: Okay, let's start recapping, because that was a lot, right? Cut the clutter: if it doesn't support meaning, it steals attention. Chunk the challenge: teach parts first and group steps into small, paced segments. Guide the gaze: be explicit about where to look and say the action as it happens. Your job isn't merely to make videos beautiful; it's to make them learnable. Control the visuals, control the timing, control the clarity. Cognitive overload is going to happen at some point, and you've probably experienced it as you've watched videos. Just remember that on the other side of your video is a human being who is trying to learn and trying to understand. You might know them or you might not, but your goal is to help them get through the complexity of whatever it is you're teaching.
Matt Pierce: Here's a quick tip we picked up a long time ago at TechSmith: if you say something like, "Oh, this is an easy process, just click blah, blah, blah," guess what? It might be easy for you, but maybe not for them. And if there's a multitude of steps, maybe more than even three or four, all of a sudden you've added complexity that you want to pull back on. Think about how you can help them take this idea or process they're trying to learn, maybe it's even thought leadership, and move it into the learning process.

Matt Pierce: As we wrap up, let's go back to what we're trying to do. We're trying to make ideas and processes move from the video into working memory and then into long-term memory, so the learner has a good retrieval process and can pull it back out of the library. That's inherently tough to do with video; it's hard to move from working memory into long-term memory. So think about what will reinforce the material and bring it back up, giving that person time to put it into long-term memory and encouraging them to use it enough that it sticks. Well, this has been a lot, and it's a bit of a different idea. I've been thinking a lot about video creation from a learning perspective, and here's what I'll end on.
Matt Pierce: If you go through this, you're probably going to say, whew, that was a lot. There is a lot of great research out there about cognitive load, working memory, the learning process, learning science. If you're looking for material related to visuals and multimedia, Dr. Richard Mayer is a great resource to search for. Jonathan Halls also has some great stuff out there; if you're in the ATD ecosystem, you might know him. He's written the book Creating Training Videos. That one is about using smartphones, but it has a lot of great background on learning science and getting started, and Jonathan has been a guest on the show. There's lots of great information out there, and I hope this is just a taste to get you going so that you want to make better, more effective learning videos.
Matt Pierce: The other thing I have to mention is that AI is going to play into this, right? You can take your ideas and pit them against AI: ask it to pull on the current research, ask it to look at things, to fact-check, to see whether there are better ways to move this through. I've just presented a series of simple approaches; there are many more things you can do. Lean on your AI, just remember: don't let it do everything. Be the human in the process, because your ideas are fantastic and your learners will benefit from what you bring to the table in that human way, especially if you're helping them avoid cognitive overload so they can actually remember what they need to do and apply the learning you're providing.

Matt Pierce: Well, that's it. I hope some of this hits home for some of you. I'd love to hear from you in the comments, and you can always email us at thevisuallounge@techsmith.com. We'd love to hear from you, and we hope you take a little time to level up every single day. Thanks, everybody.