Digital Literacy in the Age of AI and Misinformation
Episode 243 • 8th November 2025 • Where Parents Talk: Evidence-based Expert Advice on Raising Kids Today • Lianne Castelino
Duration: 00:34:02


Shownotes

How can parents teach kids to think critically, tell truth from falsehood, and navigate an online world filled with AI-generated content and misinformation?

In this episode of Where Parents Talk, host Lianne Castelino speaks with Matthew Johnson, Director of Education at MediaSmarts, about practical ways parents can build digital media literacy at home.

Discover how to talk to your kids about honesty, praise effort over results, and foster open communication in a digital age where AI, deepfakes, and disinformation are shaping childhood experiences.

Johnson shares practical strategies for parents to engage in ongoing conversations with their kids about their media consumption, ensuring that they feel comfortable discussing any issues that arise.

He also provides actionable techniques for discerning credible information, such as using curated sources and leveraging technology effectively. As families navigate this rapidly changing landscape, fostering critical thinking and emotional intelligence will be key to ensuring that children thrive both online and offline.

Takeaways:

  • The importance of effective communication with your children about their media usage cannot be overstated—fostering open dialogue is key.
  • Understanding the rapidly evolving nature of AI and its implications on academic honesty is crucial for today's parents and teens.
  • Teaching kids about consent and relationships in the digital age helps them navigate the complexities of social media interactions safely.
  • Promoting discipline in device usage and encouraging kids to focus on effort rather than just outcomes can significantly affect their emotional health.
  • Parents should actively engage with their children's media lives, creating an environment where discussing experiences with digital content is normalized.
  • A healthy balance between skepticism and trust in information sources is essential for developing critical thinking skills in young adults.

Companies mentioned in this episode:

  • MediaSmarts

This podcast is for parents, guardians, teachers and caregivers to learn proven strategies and trusted tips on raising kids, teens and young adults based on science, evidence and lived experience.

You’ll learn the latest on topics like managing bullying, consent, fostering healthy relationships, and the interconnectedness of mental, emotional and physical health.

Transcripts

Speaker A:

Welcome to the Where Parents Talk podcast. We help grow better parents through science, evidence and the lived experience of other parents.

Learn how to better navigate the mental and physical health of your tween, teen or young adult through proven expert advice. Here's your host, Lianne Castelino.

Speaker B:

Welcome to Where Parents Talk. My name is Lianne Castelino.

Our guest today is Matthew Johnson, Director of Education at MediaSmarts, Canada's bilingual centre for digital media literacy.

MediaSmarts has marked Media Literacy Week each fall for the past 20 years to support Canadians in building the critical thinking skills needed to navigate an ever-changing online world. Matthew is also a father of two teens. He joins us today from Ottawa. Thank you so much for taking the time.

Speaker C:

Oh, thanks for having me.

Speaker B:

So much goes on on an hourly basis in the world you live in, to say the least.

And I wonder, in the time that you've been doing what you've been doing and where we are right now, how would you describe the current state of digital media literacy in the average Canadian family?

Speaker C:

Well, the short answer is that we don't really know.

One of the real issues in Canada at the moment is that we don't have very good national data about digital media literacy, whether it is in terms of the general digital media literacy of Canadians of different ages, whether it's what is being taught in schools. We have some snapshots from our own research, from research that's been done by other organizations or by Statistics Canada.

We have some insight into some things relating to recognizing misinformation, managing privacy settings, things like that. We have a few of the puzzle pieces, but we really are missing the majority of the puzzle at the moment.

Speaker B:

As that is happening, the rise of all the different technologies that we know about, in particular AI, artificial intelligence, continues at a pretty rapid pace. What is, what would you say are some of the most important things that parents should keep in mind in general?

Speaker C:

What.

Speaker B:

Whether they know about AI, whether it's on their radar or not. But when it comes to AI and their kids?

Speaker C:

I think with kids, there are two main issues relating to AI. One of them is relatively simple to deal with. The other one is a lot more complex.

So the first one is helping kids understand ideas of academic honesty, helping them understand why it's not always better to take a shortcut when we're learning things or doing work, and helping them understand why we shouldn't misrepresent the work of others as our own, whether that is something we copied off the Internet or something a chatbot wrote for us.

And as is so often the case with making rules for kids, what's most important is communicating your values. So don't just tell them that it's wrong, or that you'll punish them, or that the school will punish them if they get caught.

Help them understand how this is a betrayal of learning, how it's a disservice to themselves. That's been found to be much more effective than trying to impose punishments.

And a big part of that, as parents, can be if we praise effort over results, if we don't necessarily pressure our kids to get the absolute best possible grade, but rather encourage them to work as hard as they can. And that starts very early. A lot of people have studied child development and parenting.

And one of the things that comes up again and again is the importance of praising effort, of praising kids for the work that they've done, for trying, rather than praising the result or, particularly, describing them as, you know, being smart or creative or things like that. When we do that, they fall into the trap of worrying: well, if I don't succeed, will I not get that praise?

Will I stop thinking of myself as smart or creative if maybe I don't do as well as I wanted to?

Whereas if we praise effort, if we encourage them to work as hard as they can without necessarily stressing about the result and not talking about it as a feature of their character, then they're going to be much less inclined to take those shortcuts.

Speaker B:

Yeah, no, for sure. And what comes to mind when you describe that is one constant in parenting, obviously: ages and stages. Right?

So when you think about how early some kids are exposed to technology, what are the appropriate times to have that conversation, when you're talking about what you just described, you know, the betrayal, the reality versus the false narrative, all those things? When, ideally, should these conversations start? At what age?

Speaker C:

I think they should start as soon as possible, and they're going to be different at different ages. So these are conversations that we start having early and that we keep revisiting, in exactly the same way as we revisit the conversations that we have about other aspects of their lives.

You know, the conversations that we have with them about road safety are going to be different at different ages, because they're capable of different things and their world is different. When they're really young, it may be a matter of having them stay close to you and hold your hand when you're going to the park.

As they get older, it may be looking both ways before they cross the street on their own, and so on. It's exactly the same thing. What they're capable of doing is different. Their world is different as they get older.

So very early on, we may not even be thinking about things in digital technology or media terms. We may just be, for instance, trying to make sure that we are praising effort and encouraging them to be honest.

We're rewarding honesty from early on. We're rewarding open communication from early on.

Because that's one of the other things that is so, so, so important across every aspect of digital media literacy, is that your kids know they can come to you, that they feel comfortable talking to you if things go wrong or if they're not sure what's right. Because we do know when it comes to things like academic honesty, kids are often very confused. And AI makes it even more confusing.

I know with my kids, at the same time as their teachers are putting in all kinds of processes to try to discourage AI use or to prevent kids from being able to use it, they're also getting tutorials at the library or in assemblies on how to use AI tools.

And AI companies are paying millions of dollars to get their technologies into schools, or if they already have their technologies into schools, they're paying money to train teachers, to train administrators and school librarians in how to use them. So kids are absolutely getting mixed messages. They need to know that they can come and talk to you if they're not sure what's right.

Because we don't want to throw these tools out, exactly the same as with our other digital tools: phones, computers, social media, search engines, Wikipedia.

We really don't want to wind up in the situation that we were in with Wikipedia, where people were discouraged from using it rather than being taught how to use it effectively.

And so we had an entire generation of students who, first of all, didn't turn to what has become an extremely useful tool and one of the essential tools for verifying other sources, but also, when they did use it, didn't know how to use it effectively, because they'd never been taught.

And so they didn't know how to tell the difference between a reliable Wikipedia article and one that may be less reliable. They didn't know how to tell if a Wikipedia article is in the middle of an edit war, so that maybe you come back in a little while rather than using it right now. They didn't know how to follow the links in a Wikipedia article if they want to double-check that the references are being reflected accurately. And these are exactly the same skills that older kids are going to use when it comes to AI.

Speaker B:

What makes all of what you just described so uniquely challenging in many households and for many parents is that many parents themselves are caught up in this storm of disinformation, and it is difficult for them to discern what is real from what is not real.

So, Matthew, can you take us through some examples of false or misleading content and information that families are most likely to encounter on a regular basis?

Speaker C:

Sure. Again, we don't have great data on the specific topics, but we can paint a broad picture.

We know, obviously, that news and political topics are a big part, particularly of what adults see that is false or misleading, because one of the most common forms of misinformation is things that are real but are shared with misleading context.

So we've seen, for instance, during recent conflicts, footage from completely different wars or conflicts being shared with the claim that it is happening now in Ukraine or in Gaza or wherever.

We see that with natural disasters, where you will often see footage from a completely different natural disaster being spread, and people are usually sharing these things essentially for clout, to get attention. They don't have actual footage of the event, or they don't have footage that people haven't seen.

And so they will share something that is false in terms of the context.

Health and wellness is another big one, and I would say that's probably the one that young people see most often that has the highest degree of misinformation.

We have a program called our Teen Fact-Checking Network that's been going on for a little over a year now, where we train teenagers in our Break the Fake program for verifying information.

And then they find claims that they don't know ahead of time whether they're true or not, but that seem like they might be too good to be true or might be false.

And then they verify them and they make these terrific videos, usually about a minute and a half, two minutes long, that either confirm or debunk them and explain the steps that they took.

I remember hearing about that back in the day.

And it's an example of how these things, you know, misinformation can persist literally for decades in this case. So the ones that they're encountering most often, I would say probably are relating to health and wellness.

And then you definitely see content that is shared to try to create a false narrative, not specifically about regular politics, but usually around things like race or sex or gender or gender identity or sexual orientation, where it may not even be trying to make a logical argument.

It's not necessarily something that can be confirmed or debunked, but where it is really creating a false impression through things like memes or videos that are pushing some sort of racist, sexist, or homophobic content.

Speaker B:

So two things come to mind as you lay all that out.

One is, you know, parents on average don't have the time or, in some cases, the know-how to do that research and distill down what is disinformation, misinformation and the actual truth. So how can that be addressed?

And also, for parents who perhaps are still grappling with AI and deepfakes, all these things just continue to grow each day, and they're already starting behind the eight ball. So how can parents counter that? Are we now talking about a smaller number of trusted resources, at the end of the day, that families can go to?

Speaker C:

That's certainly one approach that we can take, and that is to curate resources that we already know that we can trust and turn to them. And there are a lot of things, a lot of skills that we teach that fall into that category.

So one of the very simple things that we teach people is the idea of using the News tab when you do a Google search, because the News tab is much more curated.

Now, I'm not going to say that you can absolutely believe everything that you see in every source in the News tab, because they do include some strongly partisan sources, some sources that don't necessarily have a great track record. But everything in the News tab is a real news source that really exists. And already, just by doing that, about 90% of the Internet is excluded.

So when you're using the News tab, you want to look at a couple of different sources, ideally ones that you recognize, that you already know have a good track record. And that's really helpful for when something happens in the news.

When you hear something, particularly now, of course, here in Canada, when it's not actually possible to share Canadian news links on Meta platforms like Facebook or Instagram, you're often going to get a news story shared with you without a link, without context. And so when that happens, when you're not sure, did this news story really happen, and am I getting a basically accurate view of it?

That's when you do something like going to the news tab, where you can find out what are most news sources saying about this news story. And if you do find it's really only one source that's talking about it, then you're cautious. You wait to see if other sources confirm it.

Another thing that we can do that's pretty simple is make what's called a custom search engine. And anyone who has a Google account can do this.

I don't think it works, unfortunately, with student Google accounts, but anyone with a regular Google account can do it. And what a custom search engine does is let you put in just the websites you want it to search.

So if you have certain news sources, or certain sources of health information, or a certain number of sources that you go to regularly for sports news, or whatever your interest is, it could be about Pokémon, it could be about Star Wars, it could be about anything that you're interested in.

But if you already have a number of sources that you know you can trust, you want to be able to search those quickly without seeing all kinds of irrelevant sources. So you put those into the custom search, and after that you can do a Google search, and it will look at only those sources.

And that, again, is a matter of excluding things that are either going to be irrelevant or unreliable. So we've actually made a number of those, and you can share your custom search with other people.

We made one that searches 30 fact-checkers all at once. We made one that searches only legitimate science sources.

And we made one that searches sources of information that are reliable but also safe for kids, what we call our school-safe search.

And these are really helpful because, particularly with younger kids, so much of what we want to teach them about navigating the Internet is about limiting what information they can access.

And it's something that they want as well, because it makes it much easier to deal with all of the information on the Internet if you know that everything you're looking at is a reliable source. And for younger kids, if you know that it's going to be safe for kids that age.

Speaker B:

Along the same lines, when we talk about AI-generated anything, images, information, etc., are there any specific red flags or quick cues that parents, families and kids can use to discern something real versus something that's AI-produced?

Speaker C:

Unfortunately, there really aren't. And we really recommend against trying to judge with your eyes whether something is real or not.

And there are a couple of reasons for that. One is that it simply doesn't work anymore.

A couple of years ago, you could count on things like an extra finger here or there, or parts of body parts that merged between two people, or weird things with the text. But the current crop of image and even video generators are much better than that.

The other thing is that when you're determined to find something wrong with a photo or a video, you will find it.

There are always things that look weird, sometimes just by chance, sometimes because the compression algorithms used to make these files take up less space can lead to weird artifacts. And we're always most susceptible to misinformation that we want to believe, that we hope is true.

And similarly, we're more inclined to doubt things that we don't want to believe.

And so if you're looking at something and you're scrutinizing it really closely, if you don't want to believe it, you will find reasons not to believe it, even if it is true. So what we recommend is doing the same things that we've been teaching for a decade now, and that is: look for external evidence.

With a deepfake, or something that you think might be a deepfake, the biggest question is, where did it come from?

Can you track it back to either a news source that you think is reliable, or a person who would have good reason to have had access to take that photo or video? So right now there are all kinds of photos and videos going around about the storm that has hit Haiti.

And when you look at those, many of them are deepfakes and many of them are real. And so one of the things you can do is, first of all, ask: has someone else verified it?

You can do a reverse image search using a tool like tineye.com and see have any real news sources run this photo. You can also look at the people who are sharing it. If they're sharing something that was published by someone else, track it back to the source.

And then how would they have access to that? Are they someone who lives there, or are they a journalist who is on assignment there?

If it's someone who has posted videos or photos like this from around the world in the last month, then you have good reason to think that they're probably not legitimate.

So it does come back to those same skills that we teach, what we call companion reading, sometimes also called lateral reading, which are all about looking for clues that are very hard to fake, even in the deepfake age.

Speaker B:

That is so interesting because when we talk about what makes all of this uniquely challenging, one of those things is critical thinking skills development and what that looks like in today's age when you're trying to navigate all of those things.

So as a parent, how can you go about teaching critical thinking as it relates to digital media, AI and so on, while balancing, you know, having your child maintain healthy skepticism versus trusting nothing?

Speaker C:

That is one of the biggest challenges in digital media literacy.

And we have seen that efforts to teach verification, when they focus on debunking, do have that effect: they will encourage kids to debunk things that are fake, but kids will also doubt things that are true. There's a terrific scholar named Mike Caulfield who's done foundational work in this field, and he coined the great phrase trust compression.

And that is when you are skeptical of everything, you trust or distrust all sources equally. So what we teach instead is discernment. It is not about debunking things.

It is about identifying what is the evidence that something might be true, as well as what is the evidence that something might be false. And the other really important dimension of that is intellectual humility. It's thinking about the fact that you might be making a mistake.

Now, as parents, we're in a great position to model intellectual humility because we can talk to our kids about times we have made a mistake or times we've changed our minds.

That's a really important part of it to make sure kids know it's okay to change your mind, that you can be wrong about something and then you get new information and you change your mind. We really have to model that for them starting from very early on.

We also want to teach them the idea of different kinds of expertise: that there is a difference between an expert in a category and a non-expert, but also that different kinds of experts know different things.

So even very young kids, for instance, can understand that a farmer and a zookeeper and a vet all know about animals, but they know different things about animals because they have different jobs and their jobs require them to understand different things.

And that's introducing the idea that just because someone is a doctor, it doesn't necessarily mean that they have good advice about every kind of medicine. You know, that you shouldn't necessarily be listening to a dietitian about vaccines, which is an example that we use in one of our resources.

So it is those ideas that there are relative degrees of trustworthiness; that, again, you don't trust anyone 100%, because just because you verify that someone is basically trustworthy, that doesn't mean you absolutely take their word for it.

That expertise is relative, that it lies in different fields. And finally, that idea of intellectual humility.

And so we do have three quick questions that we encourage everyone of all ages to ask themselves before you start investigating something. And that is, first of all, what do I already think or believe about this? The second is, why do I want to verify or debunk this?

Why do I want this to be true? Or why do I hope this isn't true? And the third one is probably the most important: what would make me change my mind?

Because that third question is the difference between real critical thinking and the kind of critical thinking that conspiracy theorists do. In the case of a conspiracy theorist, of course, any new evidence that confirms the theory is valid.

Any evidence that doesn't confirm the theory is evidence of a cover up.

So you need to make sure that you've established for yourself the level of evidence that you would actually accept that would make you change your mind even before you start investigating something.

Speaker B:

Those are really great pointers and really simplify it for adults and kids alike. Definitely.

So, Matthew, when you look at sort of the big picture in terms of where we are with digital media literacy, where we are with AI right as we speak, do you believe that the gap is growing in terms of the knowledge that we should have about these things, the very things that MediaSmarts teaches, or are we narrowing that gap in some way?

Speaker C:

I think some things are improving and some things less so.

So I do think in the last couple of years we've really made great progress across the country in getting curriculum updated to reflect our modern networked media environment. As you may know, every province and territory does have media literacy in its formal curriculum.

Canada is one of the only countries that can say that and was one of the first countries to achieve that.

But in many cases, that curriculum to a large extent still reflects the old media environment, where it was very expensive and difficult to make and distribute media. But we are seeing a lot of progress in the last few years.

Ontario, in particular, has integrated digital media literacy so that it is keeping all of those essential close-reading skills associated with what you might call classic media literacy, but also recognizing all of the things that we've been talking about today that are implications of the networked media environment. Where I see we may be falling behind does relate, to a large extent, to AI, in a couple of ways. First of all, as I said, I think people have learned the wrong lessons about it, because those early tools did make so many obvious mistakes.

I think a lot of people learned skills that today not only don't work but can be counterproductive. But also, I think more seriously, when it comes to things like chatbots, there's poor understanding of how these actually work, because as a species, we're inclined to treat anything as human that behaves at all like a human.

And that's been found going back 50 years, where people would respond to even very simple computer programs as though they were alive, if they were programmed to speak conversationally. It's called the Eliza effect. And this was, as I say, a very simple computer program some 50 years ago.

And now, of course, we have these tools that really do interact with us as though they were human. They're very powerful in their ability to mimic a conversational response.

And what it means is that people don't think of them as machines, they don't think of them as computer programs. But one of the things that research has shown is that the better people understand how AI works, the more critical they are in their use of it.

And so this is one example where I really think we need to improve people's understanding of literally how these tools work, so that they can understand that while they are tremendously valuable (and I think there is a lot of potential in AI tools; I don't want to throw the baby out with the bathwater), at the same time, if we think of them as being intelligent, we are going down a false path.

We have to understand that these are very sophisticated prediction machines. When you ask them a question, they look in their training set, in all of the content they've been trained on, for other times people have been asked that question. They look for what they think is the most probable response to your question or your prompt, and that's what they give you. They do it on a word-by-word basis, on a sentence-by-sentence basis. It looks a lot like thinking and speaking, but it really isn't.

And if we understand that, there's a lot of evidence that we use them more critically and more effectively as well.

Speaker B:

So much of what you just outlined there, I think, really speaks to what many families struggle with. And that is a generational divide in terms of understanding of digital media literacy within families. Right.

So a lot of kids have never known life without smartphones, or without all the things that we're talking about, whereas their parents may be slower to adopt or understand how these things work. So on that note, Matthew, what would you say is incumbent on parents today to think about as it relates to digital media and the world we live in?

It's not enough anymore to say, oh, I don't know anything about it, I'll get my son or daughter to take that over. Because we're living in a world now, as you've just outlined, where you can't really afford to let that happen with children, for sure.

Speaker C:

Well, I think it's important obviously for us to understand the tools that we use and also to understand the tools that our kids use.

And it is also really important to recognize that kids pick up the basics of new tools very quickly, but often their ability to use them does not go past the basics. It can look impressive to us because we don't know how to use these tools at all, but it's often very shallow. And AI is a classic example of that.

Even more so than with things like search engines or social media, they can get it to produce a desired result, but they really have no idea, in most cases, how it actually does that. And again, that is key.

And our research actually found most young people, even up to teenage years, think that their parents know more about technology than they do. So they are looking to us for help, they're looking to us for guidance.

So it's a great opportunity for us as parents to be co-learners with our kids, to explore these tools together, to have them show you what you can do with this, and then maybe ask them to explain how that happened. And if they can't really explain it, or if that effort shows the limits of their understanding, then you can go out and learn together.

Speaker B:

If there's one thing that you hope parents take away from this conversation and act on, or do differently, or just even think about after listening to this conversation, what would that be?

Speaker C:

We always say it's essential to have an ongoing conversation with your kids about their media lives.

Part of that is co-viewing: when you're using media together, whether that's watching a movie or using digital media, being willing to pause when you want to address something. But part of it also is just making sure they know that they can come to you. So we encourage parents to set household rules around media use.

We've got 20 years of evidence that shows this really does make a difference in how kids behave.

And we always say the number one rule should be if anything ever goes wrong, if you ever see or experience something that makes you upset or you don't know how to deal with, come and talk to me. And my promise to you as your parent is I will not freak out. I will not immediately take away the device or the app or the game where it happened.

I will stay calm. I will listen to you, and we will find a solution together.

Speaker B:

Lots of incredibly important food for thought. Matthew Johnson, Director of Education at MediaSmarts. Really appreciate your time. Thank you so much.

Speaker C:

Thank you.

Speaker A:

To learn more about today's podcast, guest and topic, as well as other parenting themes, visit whereparentstalk.com.
