Will AI Lead Us to Prioritize Thinking Over Knowing?
Episode 47th November 2024 • Digital Dominoes • Angeline Corvaglia
Duration: 00:20:09


Shownotes

In this episode of Digital Dominoes, host Angeline Corvaglia talks with Bill Schmarzo, a.k.a. the Dean of Big Data, about the responsible use of AI and big data. They focus on the impact AI will have on how we educate younger generations, and on teaching students to think critically rather than simply memorize standardized facts and concepts. The conversation highlights practical methods for training AI to think critically, underscores the importance of teaching students how to think rather than what to think, and discusses the risks and benefits of AI, including the potential for emotional attachments. The episode also touches on the broader societal implications and the transformative potential of the widespread use of AI in education.

 

00:00 Introduction and Guest Welcome

00:38 Addressing AI Harms and Parental Concerns

01:14 Teaching Responsible AI Usage

04:46 The Socratic Method and AI

07:25 Emotional Attachments to AI

10:36 The Future of Education and AI

16:17 AI as an Enabler of Human Aspirations

17:59 Closing Thoughts and Positivity

19:08 Outro and Sponsor Message

Special thanks to Bill Schmarzo for taking time to be a part of this episode!

Follow him on LinkedIn: https://www.linkedin.com/in/schmarzo/

Episode sponsored by Data Girl and Friends

They are on a mission to build awareness and foster critical thinking about AI, online safety, privacy, and digital citizenship. Through fun, engaging, and informative content, they inspire the next generation to navigate the digital world confidently and responsibly, empowering young digital citizens and their families.

Find out more at: https://data-girl-and-friends.com/


List of Data Girl and Friends materials about some of the topics covered:


Contact us for any questions or comments: https://digi-dominoes.com/contact-us/

Transcripts

Welcome to this episode. I'm your host, Angeline Corvaglia, and I'm here with Bill Schmarzo, a.k.a. the Dean of Big Data, and Data Girl and Friends' very own Data Dean. I really love talking to you, Bill, because you're not only an expert on AI and big data, but you're also an endless optimist.


I'd like to talk about AI in general. How can we, as you often say, use AI [00:01:00] to help us against the harms of AI?

Well, there are a lot of things. By the way, Angeline, thanks for having me. You know, I love these conversations; they're always fun. You ask the right questions, and together we hopefully move this, like I said, in a very positive direction.

Let's focus in on what parents can do and what we can do. We were talking ahead of time about how I'm teaching a class to a bunch of high schoolers tomorrow, about how to use a tool like ChatGPT in a more responsible way. And there are a couple of things you have to do, right? First off, we're going to go through an exercise to determine the viability of small nuclear reactors.


They're going to load it into their chat, right? You can upload it. And then with that as a frame, they can now start asking questions. Referring back to, you know, using information I gave you, what are the benefits? What are the risks? What are the pros and cons? And then teach the students that as you're having this conversation, you are also training the tool.
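The exercise Bill describes — load a research document into the chat, then force every answer back to it — can be sketched as a plain message builder for any chat-completion style API. The function name and the system-prompt wording below are illustrative assumptions, not something from the episode:

```python
def grounded_messages(document: str, question: str) -> list[dict]:
    """Build a chat history that anchors the model to the supplied research.

    Mirrors the 'using the information I gave you' framing from the exercise:
    the document goes in first, and every question refers back to it.
    """
    return [
        {"role": "system",
         "content": ("Answer only from the research document provided. "
                     "For every claim, give your rationale and name where "
                     "in the document it comes from.")},
        {"role": "user", "content": f"Research document:\n{document}"},
        {"role": "user",
         "content": f"Using only the information I gave you: {question}"},
    ]
```

A student would then send this list to their chatbot of choice and follow up, in turn, with the benefits, risks, and pros-and-cons questions.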


You ask it a question, and then you've got to constantly not only ask it more questions, but also try to teach it. Teach that [00:03:00] research assistant what's important for the conversation you're having. Even as far as asking: what are the benefits from a society perspective? List them all and give me a rationale. What are the benefits from an economic perspective? What are the risks?

So part of our job as parents, and especially as teachers, is to not ignore and ban this tool. Because if we ban it and don't teach people properly, the students are going to use it anyway, and they're going to start believing the stuff that gets fed back to them without being a doubting Thomas, asking questions, challenging it, and saying, give me your rationale.


It can feel overwhelming for adults when we say, as you just did, take all this [00:04:00] research, put it into ChatGPT, and ask it questions. A lot of adults would be overwhelmed by that. But we need to remember that the digital generation, even if they've never done it before, can learn it in a second, or even teach each other.

So it doesn't need to be a limiter, right? The adult's role can actually be to teach or help children ask the questions. Because we don't need tech to understand how to ask questions, how to probe deeper into things, right? It's a tool, yeah? It's a means.


Why? Keep asking why. Keep making it go to that second and third level. Make it challenge itself. And that's part of what I think is a really useful framework. We're going to go old school: the Socratic method, right? Socrates taught that there are six questions you have to ask in any situation, around assumptions and perspectives.

He taught his students to be dubious.

We're going to go old school, baby. We're going to become Socrates.
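The six-question framework can be turned into a reusable checklist of follow-up prompts for a chatbot. The category names below follow the commonly cited Socratic questioning taxonomy; the exact template wording is an illustrative assumption:

```python
# The six commonly cited categories of Socratic questions, as prompt templates.
SOCRATIC_TEMPLATES = {
    "clarification": "What exactly do you mean when you talk about {topic}?",
    "assumptions": "What assumptions is your answer about {topic} resting on?",
    "evidence": "What evidence supports your claims about {topic}, and what are your sources?",
    "perspectives": "How would a different stakeholder see {topic}?",
    "implications": "What are the second- and third-order consequences of {topic}?",
    "meta": "Why is this question about {topic} worth asking in the first place?",
}

def socratic_prompts(topic: str) -> list[str]:
    """One dubious follow-up question per Socratic category."""
    return [t.format(topic=topic) for t in SOCRATIC_TEMPLATES.values()]
```

Feeding these back one at a time is exactly the "keep asking why" loop: each answer becomes the input to the next level of questioning.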

Perfect. That's a really good way of looking at it, because if you start out questioning these chatbots… I have used Gemini, ChatGPT, and Copilot. Oh yeah, and I tried Replika this morning.


Yeah, I almost said "if you force it to think," which is the wrong way to frame it. If you force it to continue to the next level of research, right? You continue to ask it to find more information. You continue to ask it: how accurate is that? What do you base your assessment of accuracy on? And what are your sources?


So I can ask a question such as: what did I not consider? What did I overweight? What other stakeholders are there? And then I also love the idea of taking a persona. You know, what if I were Socrates? How would I answer this question? What if I were Martin Luther King? How would I answer this question? And you start using different personas to [00:07:00] attack the question from different angles.

You are forcing that tool to consider more data in the context of the problem we're trying to solve. So the data about the Kardashians and what's going on in social media doesn't overwhelm it, because you're not concerned with that; you're concerned with this set of problems, and you're making it find more data and heap more learnings and insights onto that particular topic.
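The persona trick Bill describes can be as simple as a one-line prompt wrapper; the phrasing here is an assumption for illustration:

```python
def persona_prompt(persona: str, question: str) -> str:
    """Wrap a question so the model answers from a chosen persona's viewpoint."""
    return (f"Answer as if you were {persona}. {question} "
            f"Explain how {persona}'s perspective changes the answer, "
            "and give your rationale.")
```

Calling it with several personas in a row (Socrates, Martin Luther King, an economist, an environmentalist) produces the different angles of attack the conversation mentions.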


I got this from Arnaud Engelfried, just to say where I got it from. And apparently scientists are researching this now. And obviously the vulnerable population, which is obviously youth, is especially vulnerable to this. So how can we take what you just said and try to convince someone who is going to try out an AI chatbot in a relationship manner to consider a similar approach?


Think about a lot of our problems in social media and on cable news networks. We have a lot of people who are very vulnerable to someone of authority stating something as a fact, accepting it, and then parroting it forward, right? Parroting is not intelligence. [00:09:28] As a society, AI and data literacy is critical, but it has to be taught around how we as humans make more informed decisions, to improve our odds of surviving and making the right decisions.

Our education system has trained us to memorize and regurgitate.

And now we're saying, no, no: with Gen AI, the people who are going to be successful aren't the ones who can memorize and regurgitate. They're the ones who can think and apply. And that's a total flip. It's an empowerment. I'm not just being told what to think; I'm having a chance to experience it, to explore it, to make my own rationale. Because we know that 1 plus 1 is 2, but almost everything else we've learned about history and society is in the grey.


I think that's a really good point, and I love that we're making it clear that in order to do something about the current digital situation and AI, the first steps are not online at all.

his talk about the future of education.

I mean, I would have used a chatbot. Anybody would. You don't have to pretend. And then, when I said that, she said it's true, actually, that there are always going to be cheaters. There are always going to be people who take the shortcut. But if you take the shortcut, you're just not going to succeed in the long run in the AI-filled [00:11:34] world.

We humans are wired for curiosity, imagination, and exploration.

But we go through standardized classes, standardized tests, standardized classrooms, and standardized textbooks. Everything we've done in our education system is about the standardization of memorization and regurgitation. And what makes humans so powerful — our ability to imagine, to be curious, to explore — gets thwarted.


Right? And that Yoda is Your Own Digital Assistant. I'm making it work for me. I've written over 500 blogs in my life, and I don't have to worry about them: I can ask it a bunch of questions, it can go find all that information across my 500 blogs and bring it together. I can challenge it. I can make it do all that heavy lifting.


Is it a good idea? If it's not a good idea, why not? How do I make it better? So I think there's a really incredible chance for AI to make us humans more human, and to leverage that which makes us powerful entities. So I'm very encouraged by what's going to happen. Yeah, there are going to be pain points. Yeah, there are going to be people who cheat, who go around the edges.


The education system failed them, and it's failing us if it doesn't refocus on making us humans more human: empowering us to challenge, to be dubious, and to use this tool, this Yoda, to nurture and fuel our curiosity and imagination.


And that's one thing that is easy to forget for people who don't think about this and spend a lot of time on it: it doesn't want to cheat you.

It's not that smart. It's not going to teach you. It's only going to do what you teach or train it to do. It's really not an evil force.

It does what you want it to do.

And there was one thing I read just the other day, and I think it's so perfect for everyone who's panicking about how AI, AI chatbots, and social media companies are going to completely destroy society. [00:15:32] It was about when women got the right to vote: at the time, the power structures working against women's suffrage were much stronger than today's tech companies.

There's also the example of workers winning the weekend.

So it doesn't have to be anti-tech, right? It can be tech on our side.

The beauty of AI is this: traditional analytic models make predictions based on the trends, patterns, and relationships they see in historical data, and just project that forward. So all the biases in your historical data are amplified in your predictions, right?


AI is different. AI allows me to define the variables and metrics around what I aspire to be, and to put [00:17:00] those into the AI model, the AI utility function. So my AI models can factor in things like environmental concerns, community concerns, societal concerns, fairness, ethics, and responsibility, right?

The AI models can be constructed in a manner to do that. Historically, if environmental concerns were not in your historical data, well, too bad. But now we have a chance to change that. So to me, AI is a very powerful enabler of what we as humans aspire to be, not what we've been.
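Bill's "AI utility function" point can be illustrated with a toy weighted score. The aspiration variables and weights below are hypothetical, but they show how concerns absent from historical data can still steer a model's choices:

```python
# Hypothetical aspiration variables and weights for a toy AI utility function.
WEIGHTS = {"profit": 0.4, "environmental": 0.3, "community": 0.2, "fairness": 0.1}

def utility(outcome: dict) -> float:
    """Score a candidate decision against aspirations, not just historical profit."""
    return sum(w * outcome.get(var, 0.0) for var, w in WEIGHTS.items())

def best_option(options: dict) -> str:
    """Pick the option whose outcome maximizes the aspiration-weighted utility."""
    return max(options, key=lambda name: utility(options[name]))
```

With these weights, an option that maximizes profit alone can lose to one that scores moderately on profit but well on the environmental and community terms — which is the flip from "what we've been" to "what we aspire to be."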


I think that's [00:18:00] the perfect thought to close with. And I want anyone who's listening, no matter how pessimistic you might be at the moment, to really spend some time with this and believe it. Because we need to believe that change is possible and positivity is possible. Because the AI is only, as you say, what we teach it to be, and we can teach it to be positive. We can take back the narrative from people who only want to make money at the expense of other people's data.


We're going to march. Yeah, we are, absolutely. You're going to make this happen. We're going to band together worldwide. Yeah, because it's a worldwide problem, a worldwide issue.

We always have great conversations.

Okay, thank you. Then talk to you soon. Alright, cheers.

Please let us know what you think about what we're talking about in this episode and the others. Check out more about us and subscribe at digi-dominoes.com. Thank you so much for listening. I'd also like to thank our sponsor, Data Girl and Friends. Their mission is to build awareness and foster critical thinking about AI, online safety, privacy, and digital citizenship through fun, engaging, and informative content.

