Episode 4: W.I.S.E. A.T. A.I.
Episode 4 • 7th May 2024 • Tangents with TorranceLearning • TorranceLearning
Duration: 00:19:04


Transcripts

Meg Fairchild [:

Hey, Megan, let's do a podcast.

Megan Torrance [:

Great idea. What should we talk about?

Meg Fairchild [:

Okay, Megan, there's a lot going on. This is yet another podcast about artificial intelligence that we're adding.

Megan Torrance [:

Yes. And it's not just us. There's a lot of noise about artificial intelligence, in part because it's cool, it's exciting, it can be exhausting, it's hard to keep up, and everybody wants a voice. And if I pull on my nerdy business-cycle, macroeconomics hat, this is totally normal for this stage in any new innovation's development. Lots of things going on out there. Everybody's trying to get a piece of the pie. A lot of people doing things and trying out things. Not all of it is scalable.

Megan Torrance [:

Not all of it is applicable. So you'll see vendors talking about stuff that doesn't really have traction in organizations, or lots of cool pilot projects that don't actually happen for real. And you can expect at this point, and we're seeing this, a lot of startups, a lot of consolidations, a lot of failures. It'll be a mixed bag. We saw the same thing with xAPI, and it's not just technology, right?

Megan Torrance [:

There's a lot of things going on. Um, I think a good analogy, if I could: as states legalize recreational cannabis, all of a sudden every other store on the street sells cannabis, and every other billboard has a different way of talking to people about cannabis. And you'd think that's the only product out there.

Meg Fairchild [:

Yeah.

Megan Torrance [:

Right. And it's the same thing with artificial intelligence. So it's. It's kind of natural, but it's also kind of exhausting.

Meg Fairchild [:

So, do you know if organizations are starting to form strategies around this? Are people using it at work or mostly making up limericks and bedtime stories with their AI?

Megan Torrance [:

No, I think things are definitely maturing, right? So early on, like late 2022, early 2023, when generative AI really became accessible to people with ChatGPT.

Megan Torrance [:

Yes.

Megan Torrance [:

Tons of limericks, tons of silly things out there, and I'm not seeing nearly as many of those anymore. So it's definitely maturing. I was actually at a conference: ATD Central Iowa had a professional development day, and I keynoted it and talked about AI in L&D. And at one point, I was totally just winging it, but I asked the group, 50 people there, I said, so how many of you are in organizations that have an AI strategy? And about 10% of the room raised their hands. And I said, so how many of you are in organizations in which the strategy is, don't do anything, you know, like, just stop, thou shalt not use it? And about 10%, maybe 15%, raised their hands.

Meg Fairchild [:

Yeah.

Megan Torrance [:

And I said, so for the rest of you, do me a favor: raise your hand if you haven't really heard anything clear from your organization yet. And there's your bell curve, because the remaining 70% of the room raised their hands.

Meg Fairchild [:

Wow.

Megan Torrance [:

And it was just powerful. And we all got to see kind of like a human infographic, right?

Meg Fairchild [:

Yeah.

Megan Torrance [:

And yet I'm reading survey research and kind of state-of-the-industry stuff. Eddie Lynn has something coming out with the Learning Guild that'll be out in May on the state of AI use in organizations, and what he and others are finding, and I'm corroborating, is that L&D people are using AI. So their organizations may not have a strategy for it, or may have said, no, thou shalt not use it, but people are using it. And that means that there's a big gray area. I just spoke with an organization, a small company, today, and a few people are using it, a few people are not. And it's still early days for many organizations.

Meg Fairchild [:

Right. So if organizations don't have a policy around it, they're not providing that guidance to their employees. What can we do to make sure that people are getting it right or using it in ways that don't cause harm or that are good and ethical?

Megan Torrance [:

Well, and that's exactly the thing that you and I were looking at months ago, right? It was, how do we provide guidance? And it was surprising, you and I remember, we were talking about the things people were using AI for, or what was missing in that conversation. So that's where we created the WISE AT AI framework. And because I identify as an instructional designer, I tend to make acronyms that help you remember things, or mnemonics, a word which I can spell but, on a podcast, can never pronounce. But that's why we came up with WISE AT AI.

Meg Fairchild [:

Yeah. Tell me more about WISE AT AI.

Megan Torrance [:

Well, so it's a framework, right? And it's a framework that applies whether you're using ChatGPT, or Midjourney, or DALL-E directly. So I think of those as, like, direct-to-consumer AI tools. It applies whether you're using AI that's embedded in another tool. So, you know, our project management software, we use ClickUp, and it has all sorts of AI power-ups. And Miro, our favorite tool, has all sorts of AI power-ups, right? And then it also applies if you're creating either learning experiences or tools or products for other people to use. And so I wanted to make sure that this framework applied across a bunch of different places. So WISE AT AI stands for something, of course. The W is Wisdom, right? Wisdom in application.

Megan Torrance [:

Am I using AI for a purpose that is a solid and useful purpose? So that was a good start for us, right? Am I asking AI to do things that we need AI for, that are useful, and that it's actually good at?

Meg Fairchild [:

Which I would think, you know, part of your wisdom comes through application itself and practicing and seeing, oh, actually, it's not very good at this thing.

Megan Torrance [:

Very good layer. Yes, yes. Very, very good. And that's a lot of the criticism that I hear: people will ask it to do something that it's not good at and then get bad information, and they're like, well, that's kind of dumb. Right. So that's important. The next one is Inclusion.

Megan Torrance [:

Am I using inclusive inputs and making sure that the results I get are inclusive and free of bias? And this has gotten a lot of conversation, because the large language models are trained on the Internet, which is created by humans, and humans are biased. Therefore the content they put on the Internet is biased, and therefore the content that ChatGPT, or whatever, has learned from is biased. And so it's important to always be on the lookout for that. And many of the large providers are on the lookout for that. I think the other thing, though, is that because these have been trained on the Internet, they've also been trained on large volumes of equity scholarship. So you can change your prompt to tap into identifying those areas of bias. So I have at times, in a conversation with ChatGPT, asked it to apply a DEI framework, or to act as though it's the DEI officer in an organization: what perspectives are we missing in this conversation? And I've gotten back fantastic stuff.

Megan Torrance [:

So inclusion is really important, right? So that's the I in WISE AT AI. And next is Security. So security in terms of people's data, security in terms of an organization's data; like, you don't put the Big Mac secret sauce recipe into AI. And security also has to do with knowing your providers and your software partners, what they're doing with the data, how they're training their models, and that responsible use there, too. So there are a lot of pieces and parts that security can dig into, and that's really where a lot of this goes when we get into the IT side of AI. There's a ton going on there. E stands for Equity.

Megan Torrance [:

So W-I-S, and now we're to E. We see equity in a couple of dimensions. One is around the creators. Are we communicating to people? Do we have their consent to use their content as part of the AI model? And are we compensating them fairly for that in those contexts? And then, are we equitably allowing users of AI systems to opt in or opt out? And are we making AI systems available to everyone, or are they only the exclusive property of certain people? So that's WISE.

Meg Fairchild [:

That's a lot. Yeah, I'm curious about the equity piece. And, you know, if we know that certain people are not being compensated correctly, how do I, in my little space, have any way to affect that?

Megan Torrance [:

That's hard, right? Because, for one, Anthropic has a model that is, in theory, designed more appropriately and equitably around the content that they scrape from the Internet. And there's case law developing right now: the New York Times has sued OpenAI over the use of their archive. I know as an individual I probably can't influence any of those things, but I can influence what I ask AI to do. So I was recently reading an article about, I think it was Midjourney, that was creating images that were gosh darn close to copyrighted images, right? You know, Marvel superheroes or whatever. And it was possible to get Midjourney to create something that was really close to something that I know full well is a copyrighted image.

Megan Torrance [:

And so for me personally, I just don't ask it to do those things. I'm just a tiny drop in a great big sea. But that's my part, right?

Meg Fairchild [:

Yeah, yeah. I guess one thing that rattles around in my head is, what if the thing you're asking it to create is based on a copyrighted image? It's spitting out something that's actually very closely based on another image that's copyrighted, but you have no knowledge of that image, and so there's no way for you to check it.

Megan Torrance [:

All of these things are hard.

Meg Fairchild [:

It stinks.

Megan Torrance [:

It stinks. Yep. And I think the other thing around equity is, to what extent do we choose intentionally to use human sources in order to maintain that equity, right? So for this podcast, we have chosen to ask Dean to create his custom music, and that was really important for us. And so there are lots of layers here. And I think one of the big important things, in the absence of black-and-white clarity or individual capability to influence a larger system.

Megan Torrance [:

I think simply having the conversation is a really important part. That's the first step that we can all take: having the conversation and the recognition.

Meg Fairchild [:

Yeah, actually I'm going to circle back to conversations, because you and I were just having a conversation earlier today about when to use AI for learning projects. And so I'm thinking about this in terms of Wisdom in application. Would you want to create a learner persona using AI? And we were saying, well, actually, I might not want to start with that. I could certainly use it, but I'd gather some information from my subject matter experts first, to make sure that I'm getting the information from them on who my learners are, instead of relying on something, presenting it to them, and asking them to say, yep, looks good, let's move forward with this learner persona. If they're creating the content, it's going to be more unique to them and their situation, versus if I just show them something, they might not think of all the other nuances that would automatically come up off the top of their heads.

Megan Torrance [:

I think that was such a powerful conversation for me, because it helped really cement the fact that the value of that learner persona conversation is multilevel. Part of it is there are things that only the client can tell us about their learners, and that's super important. But I think, now that we're having this conversation, the other thing is there's value in having the client think about their learners in a way that maybe they hadn't before.

Meg Fairchild [:

Right?

Megan Torrance [:

So. Yeah, right. So that, I mean, that totally gets to that wisdom thing. Totally.

Meg Fairchild [:

Yeah, totally.

Megan Torrance [:

And I think it touches on, right, so if we have WISE AT AI, the AT: the A in AT is Accountability. Taking accountability for what you do with AI and its results. And so I think that that's a piece there too. You can't just say, well, the AI made me do it. And that's maybe less on.

Megan Torrance [:

Well, no, I was going to say maybe it's less of an issue with generative AI, but if you get a bad answer from generative AI, you're still on the hook for delivering bad answers. But also in some of the classic conversations around recruiting systems that are generating bad outcomes because of their biased training data, it's like, no, the humans are accountable for those bad outcomes, and that's important. And Transparency. I think transparency operates at multiple levels, but the big one is making sure that people are aware when AI is being used and when their content may be used for training. For example, we have our Emma projects AI interface and case challenge practice, where we let people know that the feedback they're getting from that case challenge, yes, it is amazing, it is nuanced, and it's contextualized and personalized, but it's also delivered by AI. And that's really important for us to say. It's both the accountability and the transparency.

Megan Torrance [:

But for that particular product, we're using xAPI to store every student answer and every piece of feedback that the AI gives back, so that we have that data as an audit trail. And that's a big piece.
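[Editor's note: for readers curious what that xAPI audit trail might look like, here is a minimal sketch of a statement recording one learner answer plus the AI's feedback. The activity ID and the feedback-extension IRI are illustrative placeholders, not TorranceLearning's actual identifiers; only the "answered" verb IRI comes from the standard ADL vocabulary.]

```python
import json
import uuid
from datetime import datetime, timezone

def build_xapi_statement(learner_email, answer_text, ai_feedback):
    """Build an xAPI statement capturing a learner's case-challenge answer
    and the AI-generated feedback, so both survive as an audit trail.

    The activity ID and the feedback extension IRI are hypothetical
    placeholders; a real LRS will store whatever IRIs you choose.
    """
    return {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": {"objectType": "Agent", "mbox": f"mailto:{learner_email}"},
        # "answered" is a standard ADL verb IRI.
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/answered",
            "display": {"en-US": "answered"},
        },
        "object": {
            "objectType": "Activity",
            "id": "https://example.com/activities/case-challenge-1",  # placeholder
        },
        "result": {
            "response": answer_text,
            # The AI's feedback rides along in a result extension (IRI is illustrative).
            "extensions": {
                "https://example.com/xapi/extensions/ai-feedback": ai_feedback
            },
        },
    }

stmt = build_xapi_statement(
    "learner@example.com",
    "Escalate to the manager.",
    "Good choice; consider documenting the incident first.",
)
print(json.dumps(stmt, indent=2))
```

Each statement would then be POSTed to the LRS's statements endpoint, giving a queryable record of exactly what the learner said and what the AI said back.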

Meg Fairchild [:

I've got one more example that I think relates to this. So I was reviewing a prototype that Matt created the other day, and he had used, I think it was GitHub's code assistant, Copilot. There you go. He needed names of people, and it had created a whole bunch of names. Now, these names were probably your stereotypical American, probably white-person names that were spit out. And he caught it and was like, yeah, you know, we need better names. We need greater representation here.

Meg Fairchild [:

So that speaks to the accountability and the inclusion and all of that. Yeah, hugely so.

Megan Torrance [:

And so that's where WISE AT AI has been helpful for us, kind of like a rubric, right? And that's how I've used it a few times. Just like, how are we using this? And is that a useful way of going about it?

Meg Fairchild [:

Yeah, all the things that you need to think about along the way.

Megan Torrance [:

One easy memory aid.

Meg Fairchild [:

Okay, Megan, how'd that go?

Megan Torrance [:

I'm gonna say it went pretty well, but here's why. Because Dean, our producer, is a freaking audio ninja, and so by the time anybody listens to this, all of the flubs will be magically gone, and people won't even know that there were any in there.

Meg Fairchild [:

Yeah. And then he can, like, take this little piece here and splice it over there.

Megan Torrance [:

He makes us sound good.

Meg Fairchild [:

Yep.

Megan Torrance [:

Thank you, Dean. Thank you, Dean.

Meg Fairchild [:

This is Meg Fairchild and Megan Torrance, and this has been a podcast from TorranceLearning.

Meg Fairchild [:

Tangents is the official podcast of TorranceLearning, as though we have an unofficial one. Tangents is hosted by Meg Fairchild and Megan Torrance. It's produced by Dean Castile and Meg Fairchild, engineered and edited by Dean Castile, with original music also by Dean Castile. This episode was fact-checked by Meg Fairchild.
