This week Sharona and Boz share some ideas, tools and methods for making assessment writing easier including AI, CheckIt (programmable assessment generation) and intentional alignment of content and assessments.
Links
Please note - any books linked here are likely Amazon Associates links. Clicking on them and purchasing through them helps support the show. Thanks for your support!
Resources
The Center for Grading Reform - seeking to advance education in the United States by supporting effective grading reform at all levels through conferences, educational workshops, professional development, research and scholarship, influencing public policy, and community building.
The Grading Conference - an annual, online conference exploring Alternative Grading in Higher Education & K-12.
Some great resources to educate yourself about Alternative Grading:
Recommended Books on Alternative Grading:
Follow us on Bluesky, Facebook and Instagram - @thegradingpod. To leave us a comment, please go to our website: www.thegradingpod.com and leave a comment on this episode's page.
If you would like to be considered to be a guest on this show, please reach out using the Contact Us form on our website, www.thegradingpod.com.
All content of this podcast and website are solely the opinions of the hosts and guests and do not necessarily represent the views of California State University Los Angeles or the Los Angeles Unified School District.
Music
Country Rock performed by Lite Saturation, licensed under an Attribution-NonCommercial-NoDerivatives 4.0 International License.
89 - Assessment tools
===
Boz: So if we're talking about test generation and we're talking about trying to come up with multiple versions and maybe 'cause a big thing with K 12 is differentiation, can we use, can we use AI to help us do this?
Sharona: And a year ago I would've said, nah, this stuff is junk. Well, it's not junk anymore. It is not junk anymore.
Boz: Welcome to the Grading Podcast, where we'll take a critical lens to the methods of assessing student learning, from traditional grading to alternative methods of grading. We'll look at how grades impact our classrooms and our student success. I'm Robert Bosley, a high school math teacher, instructional coach, intervention specialist, and instructional designer in the Los Angeles Unified School District and with Cal State LA.
Sharona: And I'm Sharona Krinsky, a math instructor at Cal State Los Angeles, faculty coach and instructional designer. Whether you work in higher ed or K 12, whatever your discipline is, whether you are a teacher, a coach, or an administrator, this podcast is for you. Each week, you will get the practical, detailed information you need to be able to actually implement effective grading practices in your class and at your institution.
Boz: Hello and welcome back to the Grading Podcast. I'm Robert Bosley, one of your two co-hosts, and with me as always, Sharona Krinsky. How are you doing today, Sharona?
Sharona: Well, I am doing well, but we are recording this very early in the morning, and I've been up for several hours because I'm a little jet lagged.
Boz: Oh, really? Why are you jet lagged?
Sharona: Well, I think I might have mentioned last week on the pod, I had a really amazing week. I was in New York this week talking about grading reform. And I was operating on a sort of normal school schedule, which meant I had to be at the schools that I was gonna be talking to at 7:30 in the morning Eastern time. And I live in California. Well, but the good thing is because I'm used to getting up, so I was getting up at like 6:00 AM Eastern, which is 3:00 AM Pacific time. That's good. 'cause I also had to get up this morning early because I'm also giving a presentation in a couple of hours after we're done recording. So between the two, I don't know what time zone I'm in, what planet I'm on. I'm a little spacey.
Boz: So you mentioned that presentation you're giving here in a couple hours. What is that about?
Sharona: ...a mother-daughter mathematics story, from... of the group of people in the...
Boz: Yeah. And that's, I mean, especially if you think about it, 'cause your mother has a doctorate in mathematics. Like, actually in math, not.
Sharona: Well actually the PhD is in education with a cognate in math. The master's is in pure math.
Sharona: She got these in the... She got her master's in math in...
Boz: Yeah, I mean, talking about getting a bachelor's and a master's in mathematics at a time when it was still not an uncommon belief that a woman's brain could not handle the logical nature of mathematics.
Sharona: Exactly. And what's interesting about this particular presentation, because it's about mentorship. My sister and I have memories. There was a, my sister I think was in a classroom that my mom was teaching in, so my mom always taught, well, primarily taught in college once we were born, and my sister was, I don't know, six or seven, and in the back of the classroom that had chalkboards both in front and behind. And so my sister was copying on the chalkboard in the back, what my mom was doing in the front.
And so my mom was the only one that could see what she was doing. So she was trying really hard not to laugh because my mom is teaching like calculus and my sister who could barely write is trying to copy the symbols on the back of the room. But we definitely grew up in a household where math and math problems and mathematical thinking was just pervasive. And there are a number of points along my own journey where some accidental things happened that ended up being huge mentorship moments between me and my mom. So she's really the reason I do what I do.
Boz: Yeah, that's, that's really cool. And that's really cool that, you know, you're getting an opportunity to, to do this you know, for, for Women's History Month, but to really showcase the influence and, all the things that your mother did. So that's really cool. I'm, I'm very happy for you to get to do that.
Sharona: Yeah, I'm excited. I'm also excited because I gave, not the same talk, but a similar talk a number of years ago, and I thought I had lost the recording of that talk. And I recently found it. And fortunately, my mom is still with us, both of my parents I still have around, but they are aging rapidly at this point, and they're both significantly diminished from where they used to be. And so to be able to recover that recording and to talk about my mom in her heyday is just such a blessing. And I wanna thank Shanna Dobson at Cal State LA, who helped make this possible, and all of the people at Cal State LA who are enabling me to give this talk today. So shout out to them.
Boz: That's, that's very cool. I hope it goes well. I'll, I'll hope so too. Look forward to hearing from you later on to hear how it went. Yeah. But you also said something about you just got back from New York.
Sharona: And you're still talking to me, which is amazing because you were really mad that you didn't get to go with me.
Boz: We've been lucky, at the Center for Grading Reform, that we've got some recognition, and we've had some groups reaching out wanting to do PDs. And because of both mine and your main, like, real jobs, you have more opportunity to do those than I do. So, yes.
Sharona: Well, I have more flexibility to travel. You're getting the same opportunities. You just can't take them.
Boz: Well, I, I, I, yeah, that's what I meant, but I'm not mad at you. I'm definitely a little bit jealous that you're getting to do some of these traveling and I'm not.
Sharona: And especially because some of the ones we've done recently, and certainly the ones this week, are K through 12. And you're the director of programming for K through 12; I'm the director of programming for higher ed. So we're gonna have to work on getting you some more flexibility, or anyone who wants to have us come talk, during the summer is much better. But yeah, I mean, it was an incredible couple of days. And it's just really interesting to see what people are grappling with. And you're also grappling with some stuff.
Boz: Well, when we were kind of thinking about this episode and what we wanted to talk about today, something that you and I have both been dealing with a lot recently, and you've talked about it quite a bit with your new role at Cal State, one of your big things is doing all this kind of test generation for these coordinated classes. Yeah.
But I also have a new job, and one of the roles of the new position that I started back in November, besides the obvious coaching and working with teachers and working with PLCs, is we also get put on these side projects. And the two that I've been on, one of which is complete, one of which is not, is putting together the small group instruction project. And my task in both of them has really been working on the SBAC, which is the big state test that we take in California, a Common Core state, for K 12. Our big end-of-the-year state exams. And my role has been to produce review stuff. So basically test question generation, which, like I said, you've also been dealing with a lot. And that's something that keeps coming up in anyone's first introduction to alternative grading, 'cause it is based on assessment.
Sharona: Well, and that's the thing is that when you look at grading in general, one of the things that we discuss when we do our introduction to what's wrong with current grading is people talk about they want grades to be a measurement of learning, and then when we start exploring the math of the whole thing, it quickly breaks down and says that it's not a measurement of learning.
That when we say, oh a b, a student should know this, that, or the other. And then when we go and we do the math, it doesn't actually happen. So I'm able to get people to the point of understanding that traditional points and percentages grading is essentially almost random relative to learning. I mean, it completely breaks down. But when you break that down for people, you leave this gaping hole of how do you then measure learning? That leads directly to assessment.
Boz: Yep.
Sharona: I mean, that is the thing, right? Because if you're gonna measure learning, you need a tool for measurement. That's the whole thing with like statistics, validity, and precision of your measurement.
Boz: Exactly.
Sharona: Apparatus.
Boz: So first, before we get into how we're able to do this: what makes a good assessment question?
Sharona: I wanna take it a step even back from that, so that we don't lose our non-test based classes.
Boz: Okay.
Sharona: What we're really talking about when we go to the four pillars is we're talking about gathering evidence of learning. So that's even a step back from specifically a specific question or specific assessment. We wanna gather evidence of learning. What are we gathering that evidence of learning on?
Boz: And it does not have to be assessment. It does not have to be your traditional quiz or test. Like, there are lots of ways of collecting evidence of learning.
Sharona: Right. So I just wanted to kind of throw that out there to say we're trying to gather evidence of learning. That's still a measurement, and in my mind, that's still assessment. It's just not necessarily a test.
Boz: Yeah. Which is what a lot of, when you say the word assessment, like you and I, when we use the word assessment, we in our minds aren't thinking just test. But a lot of times when you say the word assessment to someone, that's where they're going. While in New York, you did a training for some schools in Florida about,
Sharona: no, actually, Michigan. That was Michigan.
Boz: Michigan, sorry, sorry, Michigan. Yeah.
Sharona: No, we're working in Florida. We're working New York. We're working in Michigan.
Boz: But you were doing some, working with some schools in Michigan on authentic assessment in the day of AI. And you weren't talking necessarily about doing tests. These were more projects and, and larger, not your typical do this math problem or define this science phenomenon.
Sharona: Right. So what we talk about with the four pillars is what we want to be done, if you're being authentic to the four pillars, is you want to give students multiple attempts to demonstrate that they have learned something. And what that something is that they're supposed to have learned is some of your learning outcomes. Right? Yeah. So if we're gonna talk about any kind of assessment, any kind of measurement of learning, you have to be clear on what it is you're measuring. And that's actually the very first step. And it's more complicated than you might think.
Boz: I, I made... back in, you know...
Sharona: And how many times this semester have you asked me why I am writing a certain test question? Because I am not currently working in an authentically aligned, four pillars based grading system. And every time you do, I kind of glare at you, because you're correct, and I hate how often you're correct. So anyway, for the purposes of this particular episode, if we can agree that when we say assessment, we are probably gonna talk quite a bit about actual tests. Because one of the things we're gonna talk about in K 12 is the state assessments that are given, which are usually in some sort of a testing format. But we really mean we are gathering evidence of learning on learning outcomes. That's what we mean with everything we're gonna talk about right now.
Boz: Yeah. Although the particular issue that comes up really is that traditional type of test problem generation, because that's where, when people hear about some of our recommendations and some of our different views of grading students learning and how to collect that, the first thing that comes up is I've gotta do how many tests?
Sharona: Well, exactly. It sounds like a lot of work. It sounds like, when people hear what we recommend and we're like, you need to write outcomes aligned assessment, and you have to give students multiple attempts at showing mastery. First of all, of course, as we've talked about before, multiple is not unlimited. People seem to hear multiple and unlimited, and I'm like, no. I mean like three, maybe four chances instead of one. There's a lot of room between one and infinite. Yeah.
But yeah, I mean, a lot of our work in education, at least in the disciplines that you and I primarily operate in, but even many others, still rely heavily on some form of a timed test. Yeah. So couple things came up for me this week when I was in New York. One of the things that's different in New York, although from what I hear it's changing, than California, is they have the New York Regents exams. And those are actually graduation requirements. So our tests in California, our state tests, you don't have to be at any given level on the test to graduate. Is that a correct statement?
Boz: ...after I started teaching in...
Sharona: ...I grew up in California. So in the...
Boz: Yeah.
Sharona: My dad worked on that program at Hughes Aircraft, so I always thought it was a kick when I got a test question about a satellite system that my dad had worked on. So I just remember that because that was fun.
Boz: Well, that, yeah, we had those state tests, but they weren't part of like graduation requirements mm-hmm or promotion requirements from one grade to another.
Sharona: Right. Now New York still does, but they are discussing changing or getting rid of those. But one of the problems that they're having, and this is again what comes up, because the first conversation I was having, the main conversation I was having this week, was not transitioning to alternative grading. It's what's wrong with traditional grading. So you and I have that talk grading as the misuse of mathematics and the measurement of student learning. And one of the things that I don't know how public it is, but the schools certainly know it, is we're seeing GPAs go up and test scores go down. Like in many, many, many, many places.
Boz: Yeah. This is actually I think we might have talked about it but there's been a couple of articles. It's a phenomenon that's been going on for a little while, but seems to have gotten a lot of attention since pandemic.
Sharona: Well, and we talked about it last week on our unearned grades episode.
Boz: Yeah. I mean, but that's, I don't know if we specifically talked about this phenomenon that's been going on for, like I said, the last five, 10 years, where it's, we really have seen increases in overall GPAs, not in just LAUSD or in California, but across the country while test scores at the same time have been going down.
Sharona: So when we do this talk, like everyone's like, yeah, we can totally understand this because there's no connection. So then people are like, well, what do we have to do? How can we change? And that's when we start getting into these assessment questions.
What have you been dealing with? And then I'll talk a little bit about what, well, what, or do you wanna talk about what we've been doing to address this issue with our programs that we work on together, and then we have things that we're doing individually as well.
Boz: Yeah. Well, first, I want to recognize and acknowledge that, especially if you are in a assessment or a test quiz type assessment heavy class, like our statistics class that we teach, majority of our learning targets, the evidence that we use is coming from assessments, coming from a traditional type of test, even though it's.
Sharona: Test or quiz. Yeah.
Boz: Yeah. And that, I guess that is an issue because, one of the first kind of knee jerk complaints or pushbacks against doing any kind of this grade reform is that means you've gotta write a lot more tests.
Sharona: Well, there's that. And also the other one is you're gonna take up a ton of time in your class, with all these tests and the other thing people say to me is, so do you mean we shouldn't use tests? We should do like other things that are outside of class and isn't that gonna increase cheating?
So like all of these assumptions come into play right away.
Boz: Okay.
Sharona: So what are we doing to handle kind of all of those? First of all, are we still using traditional tests and quizzes? Yeah, largely we are. Yeah. And many of them are in a traditional setting, meaning they are in class timed and proctored to a large degree. Not so much on the statistics, but in some of my other ones they definitely are. But what are we doing to actually generate all that crap?
Boz: So this, and this is where I was trying to get to, okay. I know we've talked about this before, that when it comes to teacher workload, in fact we did a whole episode on teacher workload, you've gotta utilize your tools. We're not trying to hide the fact or sugarcoat the fact that this can be a lot of work. And that's why you need to utilize those tools that are available to you to maximize how much work gets done in the time you have.
So for any of these things that we have to do, but especially for assessment generation. So I know one of the things, and I think we've talked a little bit about it before, but one of those big tools that we use especially in your role as this coordinator is programmable test generation. A program that we use, we code in the type of questions we want, and it generates 900 versions of it for us.
Sharona: Right. Well, and to clarify, we're on CheckIt. It operates inside a CoCalc environment. CoCalc is more like the system it works in, but CheckIt is the actual tool.
Boz: But yeah using programmable test generation.
Sharona: Exactly. Well, but we've even run into issues, though. So that's the thing. We've been using CheckIt now for, gosh, eight or nine years. Yeah. We've expanded it beyond just the statistics, and we actually use it in all the pre-calculus and calculus courses, which are not alternatively graded yet. Although I have introduced second chance exams, and so we need it for that. Otherwise, I'd be dying. I'm still almost dying. But even that can be challenging, because what it doesn't do for us is it doesn't come up with the actual questions. It can handle the versioning.
Boz: Yeah.
Sharona: Right. But it's really, you still gotta come up with the scenarios.
Boz: Which is not as straightforward and simple as it may sometimes seem.
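[A minimal sketch of what "programmable test generation" can look like, in the spirit of CheckIt but using a hypothetical template of our own, not CheckIt's actual API: a seeded generator varies the numbers of one fixed scenario and computes the answer key alongside each version.]

```python
import math
import random

def generate_versions(n, seed=0):
    """Produce n versions of one right-triangle trig question.

    Sketch of CheckIt-style programmable generation (hypothetical
    template, not CheckIt's real API): the scenario stays fixed, the
    numbers vary, and an answer key is computed for each version.
    """
    rng = random.Random(seed)  # seeded, so a version set is reproducible
    versions = []
    for _ in range(n):
        opposite = rng.randint(3, 40)   # leg opposite the unknown angle, in m
        adjacent = rng.randint(3, 40)   # leg adjacent to it, in m
        angle = math.degrees(math.atan2(opposite, adjacent))
        versions.append({
            "question": (f"A right triangle has legs of {opposite} m and "
                         f"{adjacent} m. To the nearest degree, find the "
                         f"angle opposite the {opposite} m side."),
            "answer": round(angle),
        })
    return versions
```

[Because each version carries its own answer key, a run of 900 versions is no more work than a run of 5, which is what makes second-chance exams manageable.]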
Sharona: Exactly. In fact, as we were preparing for this episode, we were like, okay, what goes into writing good test questions?
Boz: Well before we really get into what goes into good. Why is that necessary? Like why? What do you mean good? Isn't any test question a good test question?
Sharona: Well, as we've all discovered, since I don't know an educator who hasn't written a bad test, it can be hard sometimes to put your student hat on when you're an expert in your topic, and to realize that you've written a question that has things in it that are more likely to trip students up. So they're not even gonna get to what you're trying to assess. Or the question may seem related in your mind, 'cause again, you're an expert, and it just really isn't actually measuring what you think it's measuring.
Boz: See, and I, I definitely, that is a big issue, but I have recently come across a different type of issue. Because this project that I am working on is specifically trying to help prepare students for that state test. I am not writing assessment questions to assess necessarily the student's knowledge or understanding of a learning target. I'm also writing them or trying to write them in a way that's going to mimic what they're going to see on these state tests. 'cause that's another issue. We've talked about this before, you don't give a student a type of question they've never seen for the first time on a test.
That's a terrible way to set up a student for failure. And that's what I am discovering is a lot of people don't realize how different these state tests, especially if you are in a common core state, how different these questions can look from a traditionally asked question.
Sharona: Well, and part of that I think is because, and correct me if I'm wrong, 'cause I haven't seen so much of the SBAC questions, but I think a lot of them try to incorporate some of the non-content standards in the different disciplines. So they incorporate the mathematical practices in the math situation. They incorporate the anchor standards in English. They might incorporate some of the NGSS if there's a science one. Is that part of what's going on?
Boz: Oh, absolutely. But they're also asking questions that, yes, incorporate those mathematical practices and those anchor standards. But the idea of common core, especially common core math, is not just, can you do this mechanical operation of generating this answer. It's, do you actually know what the hell this means? Like it really is trying to get students to think more like mathematicians than a computer algorithm that's encased in you know, blood, bones and muscle.
Like.
Sharona: Well that's when I say the mathematical practice standards, I think that is what that is. Right, because when you look at the mathematical practice standards, it does talk about communicating mathematically and thinking mathematically about patterns and process and stuff like that.
Boz: Here's a really easy example. I think that most people, even non-math educators could understand.
Sharona: Okay.
Boz: A real common algebra one skill and a real common high school level assessment might be around the ability to simplify. To take an expression and do some manipulation to it and simplify it in one way. Most often when we test that it's, here's this thing, simplify it. Or sometimes we might go in reverse, here's this polynomial, can you factor it into its linear factors? That's what we ask.
That same skill, that same learning target on the SBAC might say identify all of the expressions that are equivalent to this one, and it will give one, but it could go either way. Some of your answers could be the simplified version. Some of it could be a way that's partly simplified, and one might be one that's actually more complex but is equivalent. And the student has to pick out all three of them. So it's not just can I do this and simplify it? It's, I've really gotta be able to recognize how these different looking expressions are actually equivalent to each other, because I truly know how to go both directions. And if a student is seeing that for the first time, then of course they're not gonna do well on it.
Sharona: Right? But again, as a mathematician, I can see the importance of that skill, right? The importance of those equivalent expressions. And I love the fact that the SBAC is driving that because hopefully if the instructors are seeing that that's on the SBAC and they're bringing that into their classrooms, it becomes a more day-to-day practice. And it's a hugely, hugely important mathematical skill.
Boz: Well, we can have a whole long debate about whether or not we're seeing that more in the classroom. But
Sharona: I'd like it to be, is the point. I would like for it to be there.
Boz: Yeah. But my, my point is the traditional way of assessing that learning target doesn't work. Not if I am trying to do SBAC prep. So that's the projects that I've been working on is generating these problems to serve as SBAC prep. So I can't do just your traditional, which does bring up an issue with trying to come up with enough of these questions that they can really be used as review.
Sharona: Well, I think it brings up actually two issues. It's the quantity. But it's also the time involved on the part of the thousands of math instructors to come up with these things. But if you had like a centralized bank or something, then many of these would be so publicly available on the internet that they might become useless as well if the questions and answers are there. Right?
Boz: Well that's just, I mean, there, there is a database that has these questions, and they do have the answers, and that's what's kind of nice about them is it doesn't take much to change those up to make it to where I'm not copying and pasting these questions, but it is still a limit, when you think about the traditional ways of, of doing these questions versus these. There is a huge, huge discrepancy in areas to pull from.
Sharona: Exactly. And that's math, which is great, but the SBAC only tests what two subjects? Math and English?
Boz: Math. Well, okay, we gotta be careful with that. There is a math and an ELA portion, but part of that ELA is testing their writing ability.
So this is how SBAC and Common Core got really clever. Some of those writing standards are called writing in history and science. So it's writing. They're getting tested on writing about historical things. Okay. Or writing about scientific things.
Sharona: Interesting. I don't think I knew that.
Boz: Yeah. There's four or five of the writing standards that are abbreviated. It's like writing in history, science and technical arts or something like that.
Sharona: The point that I was going to, though, before I kind of sidetracked us with that statement: if I'm in a discipline, either in higher ed or in K 12, that is not one of the ones in the SBAC itself, those resources, preexisting and aligned to the state testing standards, are not usually available. Right? And I don't know about you, but I have spent hours and hours on the internet trying to find good materials for some of my courses.
Boz: Yeah.
Sharona: So here's what I have been playing around with recently, and this is where I wanna go next, but if you don't wanna go there, let me know. I wanna go look at AI.
Boz: So if we're talking about test generation and we're talking about trying to come up with multiple versions and maybe, 'cause a big thing with K 12 is differentiation, can we use AI to help us do this?
Sharona: And a year ago I would've said, nah, this stuff is junk. Well it's not junk anymore. It is not junk anymore. One of the things, as we've been prepping for some of our authentic assessment stuff is, we've been saying, Hey could we use AI to do some of this stuff? And the answer very much is yes, as a starter. I'm not someone who's ever gonna take anything that anyone gives me.
I don't care where I got it. I'm going to alter it. Which drives you crazy sometimes.
Boz: Yes, because when you say you don't care where you got it, that includes if you got it from yourself.
Sharona: That's not quite true.
Boz: Oh, it is too. We have never worked on a single thing from one semester to another where you didn't alter it some.
Sharona: Okay. Yes, this is true. But anyway and it's not because I think I'm better than everything. It's just I need to put my hands on it. And I very rarely find stuff that, well, I think there's some things I've picked up from you that I haven't altered though, to be honest, but that's rare. So anyways,
Boz: I find that hard to believe, but, okay.
Sharona: But literally, so ChatGPT and Gemini have been the two I've been playing with. I have a paid subscription to ChatGPT, but I do not have a paid subscription to Gemini, and it's pretty amazing what you can do. So, for example, we put a prompt into Gemini the other day, a colleague I was working with and I. And it was a pretty basic science problem. It was, you know, a wave has a wavelength this long and a speed of this, what's the frequency of the wave? Very, very straightforward. Single answer. You either know it or you don't.
And we said to Gemini, make this question more authentic, including ambiguous answers, real world complexity and alignment to the NGSS standards. So it analyzed the question, it looked for potential alignment to NGSS, and it gave us scaffolded structure to different ways to modify it.
So it gave us option one, add context and measurement. It took this thing and gave us a scenario with a sonar device on a boat and some ambiguous answers, including discuss one factor that might cause the actual speed of the sound wave to be slightly different from your calculated value. And then it gave a totally different option and said, introducing different wave types and conceptual understandings. And then it gave the answer to the original question, correctly with proper formatting. Then it gave some other questions and things about modifying it.
So all the things that we wanna see. Getting it aligned to a learning outcome, getting different scenarios that include ambiguous options for answers for students to explain their thinking. It did in about 12 seconds. So what I think is interesting about this AI use is everything that I know that I wanna do, everything that I would task myself with doing, aligning it to the learning outcomes, coming up with more relevant real world examples, coming up with ambiguous answers, I can just ask it to do it for me, and it's gonna jumpstart my thinking.
Now again, would I use any of this as is? Probably not. There might be some language I don't like. I might have to think about maybe instead of doing it in a situation they've never seen, I might wanna do it in a situation that we talked about in class, but it gets me 75% of the way there.
Boz: Yeah.
Sharona: And then you recently were picking on me 'cause I gave you a bunch of problems to code in CheckIt and what happened? I was like, here's five scenarios and what did you ask me? Do you remember?
Boz: Oh, yeah. No, I, you gave me these, we were trying to do, you have a learning target that you were trying to assess about the application use of trigonometry. You gave me these 10 different questions to code. And I think eight of them had the word drone in it.
Sharona: It was not that high. I think it was six.
Boz: No, it was, it was, there was one from each set, 'cause five of them were two sides and an unknown angle, and five of them were one known side and a known angle. Four out of the five of each scenario had the word drone in it. So like, yeah, I did, I texted you like, what is your deal with drones lately? Like, have you invested in them or, like, what are you doing here?
Sharona: And what I had done is I had asked ChatGPT to generate versions of this problem that had real-world connections. Because one of the things that I'm trying to do, especially with trigonometry, is to get students to understand why it matters. Most of our students are engineering students, and we're hearing from the engineering faculty that they're coming out of our course not understanding why right-triangle trigonometry is so important. So one of the ways I'm hoping to get them there is giving them interesting scenarios on assessments, and therefore on study guides, and backwards-designing it into the class to say, Hey, this thing's actually really useful, and in topics you might care about.
Boz: But I think it did highlight one of the issues with trying to use AI to help you do this. It ended up giving you four out of five things about drones. If these weren't meant to be spread out amongst various different assessments, I probably would've kicked these back to you and said, Hey, can you change the surrounding context of these questions a little bit more? 'Cause these are all about drones.
Sharona: And so the thing is, I have done that sometimes, where I've felt there's too many in a certain area and I will give ChatGPT a direction to say, Hey, could you give me some more, but this time make it about entertainment, or this time make it about weather, you know? So I'm still using my expertise and my knowledge to come up with different topic areas, because I'm thinking also about my students. We live in LA, so I know that a couple of the ones that I sent you were about camera angles for film shoots, because the film industry is a big deal here in Los Angeles. So I'm trying to be culturally relevant to Los Angeles specifically. Not necessarily relevant based on demographics, but based on where we live.
Boz: Yeah. Which kind of brings up another point, especially something that we deal with in the K 12 world, where differentiation is not something that's new to us. It's been an expectation for as long as I have been in education. We are supposed to differentiate for our students, and we're supposed to be doing culturally relevant assessment and teaching. This is somewhere I think AI can be such a huge time saver, 'cause I can write the question that I want and then throw it in there and say, make a question that is around soccer, 'cause I know I've got half the soccer team in one of my math classes. Or, do the same type of question, but make it around A, B, or C, because I know I've got students that are interested in those. It saves me so much time, and you know how absolutely phenomenal of a writer I am. It saves me so many small grammar errors when trying to change from one scenario to another.
Sharona: Well, and I think what's really interesting about what you just said about your grammar errors is you actually can write completely grammatically correctly, and very well. It's just gonna take you even more time.
Boz: Yeah. That's the problem.
Sharona: Than it would take me, as a fundamentally really good writer. You know, I've had the training and the background. I am a very good writer. I'm not a writer of stories and fiction, don't ask me to do that, but when it comes to nonfiction writing, I'm really good at it. It's still gonna take me, let's say, 20 minutes to come up with a scenario. For you to write it grammatically correctly might take, what, an hour? It's gonna take me 20 minutes. And chat's gonna do it in 12 seconds.
Boz: Exactly.
Sharona: And you know what? I don't have a problem with that, because by the time you become a teacher, you have really gotten pretty good at the fundamentals of your discipline, one would hope. And I think that's true for 97.3% of all teachers, or more than that. I actually do think that most teachers who go into discipline-based work have a love of and an understanding of their discipline. And so this is not a shortcut to learning the way it might be if our students were doing this. So I have real concerns about our students using AI in inappropriate ways. I have much less concern about our teachers doing it.
Boz: Yeah. All right. So we've talked about two big styles of tools that can help cut down the amount of teacher time it takes to generate assessments or assessment questions. We've got these programmable generators, which are great because once you've designed a question, they can multi-version it. Like I said, we actually have it generate 900 different versions for us. And we've got AI tools that can go beyond just the versioning and actually do some modifications. There is another big one, and even though it can't always be used, it's probably the single biggest time saver when it comes to this. At your talk in New York last week, you actually had a teacher come to this realization themselves. So I'm gonna let you talk about what that is.
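The "programmable generator" workflow Boz describes, design one question, then stamp out hundreds of seeded versions, can be sketched in a few lines of Python. This is a hypothetical illustration in the spirit of CheckIt, not CheckIt's actual API; the contexts, parameter ranges, and function names here are invented for the example:

```python
import math
import random

# Hypothetical contexts, echoing the drone/film/sonar examples from the episode.
CONTEXTS = ["a drone hovering above a field",
            "a camera craning over a film set",
            "a sonar buoy floating offshore"]

def generate_version(seed):
    """Return one (prompt, answer) pair for a right-triangle trig question.

    Each seed deterministically produces one version, so the same bank
    can be regenerated exactly for answer keys or retakes.
    """
    rng = random.Random(seed)          # per-seed deterministic randomness
    context = rng.choice(CONTEXTS)
    angle = rng.randint(20, 70)        # angle of elevation, in degrees
    distance = rng.randint(30, 120)    # horizontal distance, in meters
    height = distance * math.tan(math.radians(angle))
    prompt = (f"You observe {context} at an angle of elevation of "
              f"{angle} degrees, from {distance} m away along the ground. "
              f"How high up is it?")
    return prompt, round(height, 1)

# Generating 900 versions, as mentioned above, is just a loop over 900 seeds.
bank = [generate_version(seed) for seed in range(900)]
```

Because every version is keyed to a seed, per-student or per-attempt versioning costs nothing beyond designing the one template question.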
Sharona: So when we go back to this definition of evidence of learning, when we say multiple attempts at evidence of learning: many of our disciplines cycle through a semester. They touch on a topic and move on, but those earlier topics show up as needed in later topics. So the biggest time saver might just be aligning your assessments so that for something that comes early, you can check that evidence in something later.
So the example that happened, when I was discussing this with a math department, is that the department chair had an aha moment where he said, oh, if I'm checking to see if they can graph a line, I can check it when I teach them to graph a line. I can also check it later in the semester when we're doing systems of equations, which is multiple lines, because they have to graph more lines later, not just that first one.
Boz: And so yeah, maybe I struggled as a student when we were graphing lines in probably one of the first units in algebra, as a ninth grader or even as an eighth grader. But after three additional weeks of doing this, now we're getting into systems, and I've caught back up with my basic skill of graphing a line, and I can demonstrate that, without a doubt, while solving systems by graphing.
So is that evidence of learning? And this really goes back to what Dr. Patrick Morriss was talking about on his episode a few episodes ago: he wants to find learning where he can find it. When a student provides evidence of learning, let's count it. Just because I've got a learning target and I've done assessments on that learning target, does that mean that's the only way I can assess that student on that learning target? When students provide you with evidence of learning, whether it's on a test, a quiz, a project, a presentation, or an oral discussion that you are having with the student, evidence is evidence.
Sharona: So that was episode 85, Instructor Beliefs and Their Role in the Classroom, with Dr. Patrick Morriss. But that's exactly it. If we have set ourselves in a mindset that whenever we're interacting with students we're looking for evidence of learning, it doesn't always have to be planned. I like to plan in at least a minimum number of attempts at something, but I'm happy to look for that evidence elsewhere. It really occurred to me in our statistics class, for example. We have these content standards, and all of the content standards this semester have four chances at demonstrating sufficient learning, and students have to do it twice. But we also have these mathematical practice standards, three of 'em that we use in our class, and there are 17 identified opportunities on some of those mathematical practices for students to demonstrate evidence of learning. Not because we wrote 17 assessments for that purpose, but because those mathematical practices come up throughout the course, 17 different times. We as instructors have looked at a content-based assessment and said, oh, there's a place where we can look for evidence of learning here, so let's mark this as being available for that evidence.
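The bookkeeping Sharona describes, a minimum number of successful demonstrations per standard with evidence allowed from any identified opportunity, can be sketched as a simple log. This is a hypothetical data structure, not any real gradebook's API; the names and the two-success threshold simply mirror her statistics-class example:

```python
from collections import defaultdict

# Students must demonstrate each standard twice, per the policy described above.
REQUIRED_SUCCESSES = 2

class EvidenceLog:
    """Hypothetical record of evidence of learning, from any source."""

    def __init__(self):
        # standard name -> list of (source, success) records
        self.records = defaultdict(list)

    def record(self, standard, source, success):
        """Log one attempt; the source can be a test, quiz, or conversation."""
        self.records[standard].append((source, success))

    def has_met(self, standard):
        """True once enough successful demonstrations have accumulated."""
        successes = sum(1 for _, ok in self.records[standard] if ok)
        return successes >= REQUIRED_SUCCESSES

log = EvidenceLog()
log.record("graph-a-line", "Quiz 1", False)
log.record("graph-a-line", "Quiz 1 retake", True)
log.record("graph-a-line", "Systems unit test", True)  # later evidence counts too
```

The point of the design is that `has_met` never asks *where* the evidence came from, only whether enough of it was documented.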
Boz: Yeah. And you know, I've actually taken it even a step further before. One of those practices that you're talking about is, I can create a viable argument and critique the argument of others using quantitative reasoning. That's one of our mathematical practice standards, actually, that we took from Common Core; it's one of the eight standards for mathematical practice for K through 12. But a couple of years ago, I had a student that in the middle of the semester had some real issues. She ended up coming back, ended up explaining and trying to recover her grade. Well, it got down almost to the end, and the student was still struggling to get the number of learning outcomes, for reasons outside of her own choosing, her own control. But she came to me and argued with me about an assessment that I had given her a non-mastery grade on. And even though her argument didn't end up changing my grading on that assessment, what did I do? I went and gave her mastery for that MP 2, for that argument, because she had demonstrated the ability to make a viable, logical argument.
Now, did that student eventually get that other learning target? She did, partly because we argued about this for 10 minutes, which really helped her see where her problem was, so the next time she killed it. But yeah, with that argument, even though I didn't agree with her conclusion, she had demonstrated to me her ability to argue and use reasoning to do that. So yeah, I took that as evidence of learning.
Sharona: Well, and that's the thing: when we as instructors are extremely clear, not only on what we're assessing but on what enough evidence looks like, that evidence shows up. And it's completely fair and legitimate, as an instructor, as long as I can document it, to say, this student showed this evidence, and this is how they did it. It was on this test, it was on this quiz, it was in this conversation, whatever it was. Because I can document it and say, this is what the argument was, or this is the precise use of tools, or this was the pattern and structure I was looking for. Whatever it is.
Boz: And I think that is an extremely interesting point, and a great place to start wrapping this up, 'cause we are coming up on time. So any last-minute thoughts or comments?
Sharona: One of the things I advocate for is allowing yourself to create more complex assessments within which you can look for simpler evidence of learning. So students don't have to get something completely correct on a more complex thing for you to be able to find interesting evidence. So I'd like to challenge everyone who's listening to think more holistically, in general, about ways of gathering evidence of learning, and about using the tools that we have to do that. That's my final thought.
Boz: All right, well, I think that's a great place to end it, so I want to thank everybody and we'll see you next week.
Sharona: Please share your thoughts and comments about this episode by commenting on this episode's page on our website, www.thegradingpod.com. Or you can share with us publicly on Twitter, Facebook, or Instagram. If you would like to suggest a featured topic for the show, or would like to be considered as a potential guest for the show, please use the Contact Us form on our website. The Grading Podcast is created and produced by Robert Bosley and Sharona Krinsky. The full transcript of this episode is available on our website.
Boz: The views expressed here are those of the host and our guest. These views are not necessarily endorsed by the Cal State System or by the Los Angeles Unified School District.