130 - Rubric or Scoring Guide: Why Clarity Matters and How to Build Effective Rubrics
Episode 130 • 6th January 2026 • The Grading Podcast • Sharona Krinsky and Robert Bosley
Duration: 00:50:00


Shownotes

In this episode, Sharona and Boz discuss the recent grading controversy at the University of Oklahoma and use it as a launching point to focus on why rubrics matter so much to grading integrity, consistency and student learning. They reflect on how loosely defined criteria invite subjectivity, create wildly different grading outcomes for the same work, and leave students guessing about what “counts” as quality.

Rather than debating the specific incident, they dissect the difference between scoring guides and true rubrics, the importance of clearly defined performance levels, and how rubric design shapes whether grades function as feedback or as punishment. The conversation emphasizes rubrics as communication tools—meant to make expectations visible, learning improvable, and grading decisions defensible.

Ultimately, strong rubrics are not about compliance or point allocation, but about aligning assessment with learning goals, supporting revision and growth, and reducing the hidden curriculum that traditional grading too often creates.

Links

Please note - any books linked here are likely Amazon Associates links. Clicking on them and purchasing through them helps support the show. Thanks for your support!

  1. University of Oklahoma student claims religious discrimination over failed essay: What we know
  2. OU Essay Controversy: What Happened in the Samantha Fulnecky Case
  3. How to Design Effective Rubrics
  4. Rubrics in higher education: an exploration of undergraduate students’ understanding and perspectives
  5. Steps to Designing a Rubric (Video)
  6. Rubric for the Deaf and Hard of Hearing Concert (A. Ransom)

Resources

The Center for Grading Reform - seeking to advance education in the United States by supporting effective grading reform at all levels through conferences, educational workshops, professional development, research and scholarship, influencing public policy, and community building.

The Grading Conference - an annual, online conference exploring Alternative Grading in Higher Education & K-12.

Some great resources to educate yourself about Alternative Grading:

  1. The Grading for Growth Blog
  2. The Grading Conference
  3. The Intentional Academia Blog

Recommended Books on Alternative Grading:

  1. Grading for Growth, by Robert Talbert and David Clark
  2. Specifications Grading, by Linda Nilson
  3. Undoing the Grade, by Jesse Stommel

Follow us on Bluesky, Facebook and Instagram - @thegradingpod. To leave us a comment, please go to our website: www.thegradingpod.com and leave a comment on this episode's page.

If you would like to be considered to be a guest on this show, please reach out using the Contact Us form on our website, www.thegradingpod.com.

All content of this podcast and website are solely the opinions of the hosts and guests and do not necessarily represent the views of California State University Los Angeles or the Los Angeles Unified School District.

Music

Country Rock performed by Lite Saturation, licensed under an Attribution-NonCommercial-NoDerivatives 4.0 International License.

Transcripts

130 - OU Rubric

===

Sharona: Now what I thought was really interesting though, and one of the reasons I wanted to talk about this wasn't so much the controversy. I do like the assignment as a kicking off point, but I wanted to go and dive into rubrics if we can, because I think that there's a rich world that we can explore with what makes an effective rubric, whether or not you're doing alt grading to be honest.

Boz: Welcome to the Grading Podcast, where we'll take a critical lens to the methods of assessing students' learning, from traditional grading to alternative methods of grading. We'll look at how grades impact our classrooms and our students' success. I'm Robert Bosley, a high school math teacher, instructional coach, intervention specialist and instructional designer in the Los Angeles Unified School District and with Cal State LA.

Sharona: And I'm Sharona Krinsky, a math instructor at Cal State Los Angeles, faculty coach and instructional designer. Whether you work in higher ed or K 12, whatever your discipline is, whether you are a teacher, a coach, or an administrator, this podcast is for you. Each week, you will get the practical, detailed information you need to be able to actually implement effective grading practices in your class and at your institution.

Boz: Hello and welcome back to the Grading podcast. I'm Robert Bosley, one of your two co-hosts, and with me as always, Sharona Krinsky. How you doing today, Sharona?

Sharona: I am doing extremely well as of about 30 seconds ago. So for a little bit of context, we're recording this a little ahead of time because the beginning of January is gonna be pretty crazy, but I think it's gonna come out in January. And I just got back from my son's military commissioning ceremony in Green Bay, Wisconsin. There was just this little part of me that was still a little bit worried. If you listen to the podcast at all, you know that one of my sons has had a lot more issues with his grades in college and one of my sons has had a lot fewer. And so even though he commissioned in the military, his grades for his final semester were not yet in, and literally as we were pushing record, I checked his grades and they're in. It is official: he has graduated college. I am so excited. So that's where I'm at right now. How are you doing?

Boz: I'm doing pretty well. We are recording this before Christmas. I know it's gonna come out a little bit after, but as I would call myself a typical male when it comes to shopping, I have done good this year. I finished my shopping before December 24th.

Sharona: Wow. I'm impressed.

Boz: But not by much.

Sharona: I used to, when I was a kid, do a gift wrapping booth, but it was you wrapped gifts in cans. It was a fundraiser and so the number of men that would come to the can wrap booth on the 24th, and I would put engagement rings in, like they would bring a pound of coffee and an engagement ring and we would pour the coffee in the can and then put the ring in the middle and then we would seal the can. And you had to open the can with a can opener on Christmas. Yeah. But yes, December 24th, skewed heavily male at the can wrap booth.

Boz: I resemble that remark.

Sharona: You're like, I want a can wrap booth.

Boz: But you know there's an interesting thing in kind of national news right now since we're talking about gender norms. And it's actually coming out of my home state of Oklahoma. So I don't know if our listeners have been paying attention to this, but there's been a hot story that's come out of the University of Oklahoma that is partly dealing with gender roles. We're not gonna get into that part, but we are gonna talk a little bit about the situation.

Sharona: So what I think is interesting about the situation is not only does it deal with gender roles, but it really hits on grading.

Boz: Yeah. And that's the part we're gonna focus on. For our listeners that don't know what we're talking about, can you give a real quick summary of what this national news event, what are we talking about?

Sharona: Absolutely. So what happened is there's a student at the University of Oklahoma who claimed religious discrimination because they failed a psychology essay. There was an assignment given in a psychology class. The assignment was based on an article about gender typicality and gender non typicality. And the student turned in the essay and received a zero out of 25 on the assignment.

Boz: So not just a failing grade, actually received a zero on it.

Sharona: Right, and the student used two appeal processes at the University of Oklahoma: there was a grade appeal and then a religious discrimination claim. And the university has gone through their processes and has removed this graduate teaching assistant, it was a graduate teaching assistant instructor, from instructional duties for the remainder of the semester, and probably for future semesters as well.

Boz: Yeah. So we are not going to debate which side is right or wrong. That is not our purpose, that is not our goal. We might give some of our personal beliefs or opinions on some of it, but we're not here to take sides. We're not here to say the student was right, the university was right, or the teaching assistant was right. But

Sharona: I would say that if we do give an opinion, it would be about the process and not the content. So we're not gonna talk about like the religious discrimination claims or any of that kind of stuff, but we are gonna share, I know I'm planning to share some of my experience with university processes in this situation.

Boz: Yes. But since this focused on the grade the student received, we definitely wanna look at that process of how the student got that grade, and how we might do better with how we set up our grading criteria.

Sharona: Exactly. And that's where I wanted to make sure people know we're not gonna spend this whole episode on the situation, but there are some very specific elements to the grading process in this situation that I think are important things we should learn from. And especially as we go to alt grading, some of the things that we do actually directly help with a situation like this, but we could also screw them up.

Boz: Yeah. Absolutely.

Sharona: All right. So I'd like to kick us off by reading what the assignment was. I have the description of the assignment. I'm not gonna read the whole thing, but I'm gonna read the first paragraph. Okay? It says you must write a 650 word body of text, double spaced reaction paper, demonstrating that you read the assigned article and includes a thoughtful reaction to the material presented in the article.

That's essentially the entire assignment. So the students were given an article to read, and they're supposed to write a 650 word double spaced reaction paper that demonstrates that they read the article and includes a thoughtful reaction. Now, the instructor goes on to say, points will be deducted when papers are deficient in any of these areas. I will deduct 10 points if your paper is between 620 and 649 words, and I will not give credit for papers under 620 words. Papers not turned in by the deadline will not receive credit.

Boz: I could probably go a good 10, 20, 30 minutes on those last two lines. That's not the part we're gonna focus on, but I did have to point that out. And I know some of our English and writing professors are reading that last part and they're screaming.

Sharona: And I wanna point out, this is a 25 point assignment. So if you don't hit 650 words, even turned in on time, you will automatically fail the assignment, 'cause the best you could get is 15 points.

Boz: Now, we have no idea how big that 25 points is in this course. Neither of us have found the full syllabus of this instructor, so we don't know if that is out of a hundred total points throughout the whole semester or if that's out of 10,000. So that may or may not be that big of a deal.

Sharona: Now the instructor does go on to say, please remember that your reaction paper should not be a summary, but rather a thoughtful discussion of some aspect of the article and the instructor goes on to give eight possible approaches to the reaction paper. So there's a lot of examples of ways you could do this.

Boz: Yeah. And even beyond that, right below those eight there's a line that says: there are other possibilities as well. The best reaction papers illustrate that the students have read the assigned material and engage in critical thinking about some aspect of the article. So I get where they're going with this. I get where they're going with the assignment.

Sharona: And this is a mid-semester assignment, so there's a good chance that they've already gone through several of these and they've, this may be something that is pretty well known in the course of what expectations are.

Boz: And I believe this is like a 2000 level course.

Sharona: I believe it's a junior level. Yeah. So I don't know if two thousand's the right number. I read in one of the articles, it was junior level, but.

Boz: Okay. I read it was a 2000 level.

Sharona: So then it goes on to give what most people, I think, would call the grading rubric. And there's three lines on this rubric. One of them is criteria. It says that there needs to be a clear link back to the assigned article. Can the reader assess whether the student has read the assigned article? And that's 10 points. The second one is, does the paper provide a reaction, reflection, discussion of some aspect of the article rather than a summary? And that's also 10 points. And then the third one is, are the main ideas and thoughts organized into a coherent discussion? Is the writing clear enough to follow without multiple rereads? And that's five points.

Boz: Okay. So my first thing right away, 'cause everything I have read has talked about the rubric and it not fulfilling the rubric; that's why they got a zero. I would not call this a rubric. I would not call this a rubric at all. This is a scoring guide or a scoring criteria. But I think that's the first place we need to start having our real discussion: what is a rubric? And then on top of that, what makes a good rubric?

Sharona: Absolutely. And so I'm gonna share, I did a little bit of a dive into rubric and there are two different definitions that came up and I wanna just share both of them and see what you thought. And then we can go back to decide if this is a rubric. And additionally, I wanted to make one quick point that is specific to this podcast, which is rubric versus proficiency scale. So we use the words proficiency scale a lot, and I think that the proficiency scales are essentially rows on a rubric and that the rubric is for an assignment. Do you agree, disagree?

Boz: It depends on how we define rubric. The way I define and what I think of a rubric, yes, the proficiency scales are the individual rows of the rubric with the description of each category, and then a collection of those, if you are assessing multiple learning targets or standards or criteria, all of those put together. The whole tool itself is called a rubric, but I thought we were gonna do the definition first. So what

Sharona: I decided to throw in the clarification, but now we can actually talk about the definitions.

Boz: Okay.

Sharona: So the two definitions that I have. One of them comes from the University of Nebraska Lincoln, which I think is just taking over the podcast here; we've had a couple of episodes recently with faculty from there. It says that a rubric is an assessment tool used to provide students with detailed information on the expectations used for grading an assignment. That's one. And the second one comes from some research by Panadero and Romero. That says that rubrics are documents that articulate the expectations of an assignment by listing the criteria for what is particularly important, and by describing levels of quality on a scale from excellent to poor. Rubrics have three features: assessment criteria, a grading strategy, and standards slash quality definitions. So I feel like those definitions are the same. The second one is just more detailed. I don't know.

Boz: Yeah the first one I think is broad enough and general enough that if I apply this scoring criteria from the class we're talking about, I would say that it poorly fits that definition, but it fits it.

Sharona: Okay.

Boz: When I am thinking of a rubric, and again, we've talked quite a bit on this podcast about different types of rubrics and scoring criteria and how to use the rubrics. When you ask me what a good rubric is, especially for alternative grading, it goes more with that second definition, 'cause I think that second definition does give a little bit more detail, and if I apply that criteria to this scoring guide, it does not meet it.

Sharona: Well, and the part that's missing, quite frankly, is the proficiency scale. The standards, or quality definitions.

Boz: Yeah.

Sharona: Because in what I read from the Oklahoma assignment, it just said, does it do this? 10 points. Like how are you gonna allocate those 10 points, right?

Boz: For it to be a rubric by that second definition, and by what I personally would define as a rubric, yeah, you would need some sort of qualifications: 10 to eight points, you did this; seven to five, you did this. And you would need that for all three of those different point allocations. So again, the rubric that was on this assignment was 10 points for clear tie to article, 10 points for reaction content, and five points for clarity of writing. You would need that spelled out, however you're gonna break those points up. And this is why that is so important, and why I would not call this a rubric: I bet you we could give this to 50 different people and we might have some of the same totals at the end, but I bet we'd have 50 different ways that people would break this down.

Sharona: Why don't we try it with just the two of us? So you and I haven't been in traditional grading in a while, but we still can walk the walk and talk the talk. And

Boz: I did traditional grading longer than I did alternative grading.

Sharona: I wanted to point out, the student, one of the things that triggered this is the student was given a zero. Like, specifically a zero. So why don't we see what we would give. Now, we've read most of the student's essay. We've read the prompt. We actually went and looked for the original article that they were reflecting upon. Were you able to assess whether the student read the article?

Boz: Okay, so I have already written these down. So I would recommend you write yours down so you're not influenced by my response, or vice versa. All right, so in mine, I gave this a zero, because the only tie that writing had back to this article was the mention about people teasing on gender roles. This article that they were talking about, or were supposed to be talking about, was a research article that goes into so much more than just gender teasing. In fact, you could have gotten everything that was referenced in the student's paper from reading part of the abstract, not even all of the abstract. So yeah, I would've absolutely given that a zero.

Sharona: So I would've given it a two on that line.

Boz: Okay.

Sharona: And my rationale is number one, they got one point for knowing it was about gender and they got one point for mentioning something other than the title.

Boz: Okay. So we've got a two point difference already, which doesn't seem like that big of a deal until you realize this is outta 25 points, that's 8%. That's almost a full letter grade.

Sharona: Yep. Okay. What about the reaction content?

Boz: So the reaction, I gave it an eight, what would you have given?

Sharona: I was waffling between a seven and a six. And I think it would be a six. Because 90% plus of the material was referencing an outside source that was not directly on point, in my opinion, with the content of the article. So I saw that there was a big disconnect between the outside material and the article itself.

Boz: See, and I don't disagree with you about that disconnect, except the part that was being referenced, which is being referenced from the Bible, is in support of their opinion in the reaction, which is, I think, why I gave it a little bit higher than you did.

Sharona: My problem is, I disagreed because the article was about gender typicality. So the article was about the more gender typical you are, the more popular you might be; the less gender typical you are, the less popular. But all of the reaction was about whether or not there's a gender binary. That's not gender typicality. You could have a gender binary and still have atypical presentations of gender. So I didn't feel the connection as much. The reaction was in the "oh, we're talking about gender" vein. It didn't have that academic connection to gender typicality that I would've liked in what I would consider an eight out of 10, B level paper.

Boz: Okay.

Sharona: So I gave it a six because to my mind, they completely missed the point of the article.

Boz: All right. And then the last one, clarity of writing. What did you give it?

Sharona: I gave it a three. It was, like, meh. In my mind, three out of five is C level ish, so I was like, meh, I give it a meh.

Boz: See, and I gave that a five. Okay. Partly because you're a much better writer than I am, so my standards of writing are probably different. But yeah, when I read this, I didn't have to reread anything. It was clear. So I ended up giving it a five, which means my total would've been a 13.

Sharona: And I gave it an 11. But the student got a zero.

Boz: Yeah. But the student ended up getting a zero, which is, I think, part of the bigger issue of what happened afterwards. But we are two people. We're two people that have worked together for years now and have a lot of common practices. Even though our totals aren't that far apart, again, they're eight percentage points different, and that's almost a full grade. And if you look at our individual line scores, they were very different.

Sharona: Yeah.

Boz: So yeah, imagine giving this to 50 people. And we do something like this in one of our trainings, one of our early trainings, that we call What's Wrong with Traditional Grading. We give an assignment, give some criteria, and ask people to score it on a 10 point assignment. We often get a range of almost the full 11 points, from zero to 10.

Sharona: So we very rarely get a 10, but we'll get zero to nine. Absolutely. Yeah. So I think what's important about this is: how is a student supposed to know what to do if that's the only information that they have? The exact same work, from you, from me, from this instructor, from another instructor, could get wildly different grades. And even if all of those grades are technically what you would call a failing grade, as we've talked about in our "grading as the misuse of math" talks and things like that, there's a big difference to a student between a grade of zero out of 25 versus 13 out of 25. Mathematically.

Boz: Yeah. Oh yeah. Mathematically that's a huge difference. And again, how big of a difference overall? Depends if this is 25 out of a hundred points in the class, or 25 out of 10,000, and we don't know. Like I said, we both looked and we couldn't find that anywhere. But yeah, that 11 to 13 point difference could be a huge impact on the student's overall grade.

Sharona: Now, what I thought was really interesting though, and one of the reasons I wanted to talk about this wasn't so much the controversy. I do like the assignment as a kicking off point, but I wanted to go and dive into rubrics if we can, because I think that there's a rich world that we can explore with what makes an effective rubric, whether or not you're doing alt grading, to be honest.

Boz: Okay. But before we do that, I wanna ask you a question. So yes, we're gonna get into rubrics and how a good rubric here could have fixed a few things. But before we do that, this is obviously a traditionally graded class. How would just doing alternative grading have changed this? You are doing alternative grading. You get this kind of paper. What do you do?

Sharona: Oh, they get a Not Yet across the board: revise. Revise in every row of the rubric, with details. So my revise on the article is, you didn't provide enough evidence that you actually are on topic for the article. We need more evidence, quotes, citations from the article; revise that. Second one: your reaction does not seem to be directly connected to the content of the article, 'cause again, like I said, I want them to be talking about the gender typicality point of the article, not a straight "is there a binary" or whatever. Then I would probably go into the lack of support for their arguments in the third row. It would be revise across the board.

Boz: And I think that's the point I wanted to bring up: it doesn't matter if it's zero points, 11 points or 13 points. If we're doing alternative grading, either this isn't mastery and isn't gonna count towards your evidence of mastering this learning target, or it would've been a revise. It would've been, yeah, you kinda missed the point here, or, you're not giving me enough evidence to show that you've actually read this article. Like I said, it wouldn't have mattered. The biggest problem with this particular situation is that the student got a zero, and that zero is what they fought about, what the arguments were about. Doing alternative grading? Revise. Here's the aspects that you didn't hit.

Sharona: And you said the biggest problem was the zero. What I'm actually more hurt by is this student had no opportunity to learn that they can argue better, that they can analyze better. If this student wants to disagree with the content of this article and disagree with the premise, they're allowed to do that. I will fight for their right to do that. But I would like to turn this student into a much more critical thinker. Even if I don't change their beliefs, I don't need to change their beliefs. But, they could have made an argument that made it clear that they had read and understood the research in the article about the relationship between gender typicality and popularity within a high school context. 'Cause that's an important topic.

Boz: Yeah, and I agree with you completely that one of the points of this assignment for me would've been that critical thinking. I would imagine that was part of the intent of the instructor. We've not talked to the instructor, so I can't say that for sure, but I would bet it was. And you're right, you can have critical arguments and show critical thinking and still make these same arguments that this student had, even bringing in the religious part, which again, neither of us agree with, but that's not the point.

But yeah, how do you do that? How do you have that argument in a critical way? So you're right, if the point of this assignment was to foster and develop critical thinking, the student really hasn't had any ability to do that, because this was just a zero and it's done. If we're doing alternative grading, either they're gonna need to reassess this some other way or revise it. That learning opportunity is there.

Sharona: I completely agree, and I know that it's hard sometimes to teach students who disagree with you, especially if they disagree with some of your fundamental beliefs and your identities as a professor. But that's our job. That is our job, is to enable students to learn to express themselves, their voice. At least that's my belief. I actually think that there are people out there who vehemently disagree with me. But that is my belief system. And that's a big part of why I do alternative grading. I wanna turn them into the best versions of themselves, whoever they decide to be. And so giving them the tools to critically think is important. So can we critically think about what it takes to make a good effective rubric? Are we ready to transition to that piece of it?

Boz: Because we both do agree that this scoring guide was not done well and with a good rubric, a lot of this possibly could have been avoided and some of the consequences to this grad student could have been avoided. So let's look at what makes a good rubric?

Sharona: Okay, so in my digging, I found a variety of different things, but I definitely like some steps that were laid out. I unfortunately don't have the reference for these particular steps, but I wrote them down. And so I'd like to walk through some of these and see what you think about them.

Boz: Before we even get that far, I do think we need to step back. First, a good rubric is more than just a scoring guide, going back to your definitions earlier, right? I think we really need to use that second definition.

Sharona: Yes. So we wanna have three things. Assessment criteria: what are you going to be evaluating this assignment on? What dimensions? A grading strategy: which, for those of us in the alt grading world, usually means some combination of having the actual proficiency scale, so knowing when is good enough, as well as possibly what you have to get on the assessment as a whole for it to count, if you're doing something like specs.

Okay, so for example, in my history of math world, you had to get all of the non learning outcome specifications complete, plus you had to hit at least one learning outcome at a meets expectations. And then the actual standards or quality definitions that go within each of those rows. So we're gonna look at all three of those things.

Boz: Okay.

Sharona: So the first thing you want to do is to define the purpose of the assignment and the criteria that you're gonna be using. So usually that means: what are the learning goals? Are there specific learning outcomes that are tied to this assignment? What do you want the students to get out of it? What evidence of learning is this particular assignment or assessment gathering? Now, in my world, the thing I like to say is that everything is formative until a student succeeds, and then it's summative. So every single assignment that I am scoring for accuracy, or for meeting criteria, is something that could end up being terminal. So I don't really distinguish between formative and summative the same way that maybe some of our listeners do.

Boz: Okay.

Sharona: So for me, on a rubric, for any of the classes I do, every row is either a specific learning outcome with its associated proficiency scale, or it's some other criteria that, for me, is usually on a binary. Either complete or revise.

Boz: And that's the specs aspects.

Sharona: Right. If I have specs on an assignment, that's what it is. If it's more than just these learning targets, the rest of them for me would be binary: did you do this or not?

Boz: So one thing that I wanna push back on a little bit is, where you found this, it says that you should pick three to five criteria that reflect the essential dimensions of performance. I think that is very much a standards based rubric. I think if you get into, especially, the specs side, that number can go up.

Sharona: I would actually say this particular list came from more of a traditional grading lens, not an alt grading lens. So that three to five might be, if you're doing traditional grading, how many individual rows of a rubric a student should have to meet, or how you would break up the points. So I'd just say, the three to five I don't really agree with; it's not really relevant to the alternative grading world, 'cause my spec side had a lot more than that.

Boz: Yeah, and that's why I wanted to point that out. Especially if you're more in the specs world than, say, the standards based world, then you're gonna have more. That's just how specs grading works.

Sharona: And even in standards, if you're doing any of those bigger exams, you might have eight or 10 learning outcomes available on an exam.

Boz: I would disagree with you on that one.

Sharona: I know a lot of people who do. But students might be picking and choosing. Like, they might not get all of them done, so your rubric might be a lot longer.

Boz: Okay. If a student isn't expected to do the whole thing, then yes, you might have that many. But if they're expected to do the whole thing, I would agree, probably five, maybe six is as much as you could ever have.

Sharona: I would say five individual learning outcomes on content. But you might throw your three mathematical practice standards in there too.

Boz: Yeah, I still think that's a little much. And that also might be part of where our difference is coming from, 'cause I'm really looking at this through my high school lens. Again, unless it's something like a final where the student isn't having to do all of it, I can't imagine having more than five or six.

Sharona: But I think, just as in so many other things, there's a million ways to do it, right? The big thing is you really want to know the purpose of the assignment and what evidence you're gathering.

Boz: All right, so what's the next thing? What's the next thing a good rubric should have?

Sharona: You need to determine the performance levels. And this is where I think this particular scoring tool really fell down. What are the levels? What's excellent, what's proficient, what's developing, what's beginner? Where's your proficiency scale for each of these rows?

Boz: And we've done whole episodes on this. This is part of the grading architecture. We've looked at the difference between a two-level, a three-level, a four-level. There's lots of different ways of doing this. I think you and I, on a lot of things, have really gravitated to the three-level rubric. The "this is good enough." The "this is close enough that I'm not gonna give it to you, but I'm gonna let you revise your current work." And the "yeah, you missed the mark enough that you need to actually reassess the entire thing."

But yeah, basically, how many columns is each of your rows gonna have? Is it gonna be three levels or five, whether you define those as expert, proficient, beginning, not yet? We've joked about this. I did a four-level one for a while where I gave Jedi rankings, 'cause I didn't like the language of excellent and satisfactory and almost.

Sharona: And the more exposure I have to K-12, the more I think distinguishing between beginning and developing is a much more realistic, important communication to, say, parents than it is at the university level. At the university level, whether you're beginning or developing, you're just not there yet. So in areas where you're looking at such wildly varying developmental goals for students, because kids, especially in ninth or 10th grade, are still wildly different in their individual journeys, I think having those four levels, where you have a beginning, a developing, a proficient, and an exceeds of some sort, is probably needed in many cases.

Boz: And again, we've done whole episodes on this, but it really is part of that grading architectural decision making you need to go through. All right, so what's the next one?

Sharona: So the next one is one I feel I have struggled with: writing descriptive language for each level. I use the same language for all of my existing standards, but as I go more into the mathematical practices, I think I'm gonna want to get deeper into each level. In the past I've been like, good enough means it's just good enough: you've met all the expectations, you're good at this thing. And then revise and retake, very similarly. But if I go into, say, modeling, I might have some sub-criteria where I want to talk about the specific criteria they've put in their model, and the restrictions and things like that. And I feel like I may need to get more specific about what is a revisable level of restrictions on a model versus a proficient level, so that students have more to hook onto in a specific skill of a practice I'm trying to get them to.

Boz: I don't wanna say I disagree, but we've talked a little bit about this in some of our trainings, and what I've always said is, the more vague and generic your language on the rubric is, the more detailed your feedback has to be. It's a question of whether you're willing to make that trade-off. You can use a generic rubric for most, if not all, of your learning targets, knowing you're gonna have to give more specific feedback. Or do you wanna spend your time developing those rubrics, so the rubric itself gives more of the feedback to the student without you having to go further into it? So it's a trade-off. I do think you can get away with a little bit of vagueness in your rubrics, as long as you understand that means you're gonna have to spend more time writing the specifics in the feedback.

Sharona: And I think I have been willing to take that trade-off and put it in the feedback. But now that I'm gonna be really focusing on these mathematical practices, it would help my students to have that feedback before they try the problem.

Boz: Yeah.

Sharona: And that's the other distinction. If it's in the rubric and they get the rubric ahead of time, they're able to use the rubric in their work.

Boz: And I would add one other point. If you are doing this in any kind of collaboration with other teachers, whether it's a coordinated course or not, the more vague it is, the more differences you're going to get in those scorings. Part of why we argue against traditional grading is that it is so subjective. So if you're doing this in a coordinated course, or with common assessments among multiple instructors who are scoring and teaching it, then I think, in my opinion, you need to be more on the detailed side. Otherwise you are leaving it up to interpretation.

Sharona: And I wanted to talk about one example, which we're gonna link in the show notes, to be a little bit more specific about this. We work with this amazing woman, Annie Ransom, a middle school and high school science educator. I just did a whole PD series with her, and one of the things she broke down is one of the science and engineering practices in the new NGSS standards: modeling. In modeling, you have these sub-criteria. You're supposed to be able to define criteria and constraints. You're supposed to generate and evaluate solutions. You're supposed to develop and model solutions. You're supposed to analyze data and iterate. And you're supposed to communicate, disseminate, and document. So it's one learning outcome, in theory, that students can model, right? But there are these individual dimensions, and for each of these individual dimensions, she developed a four-level rubric. So you can track, from the highest level to the lowest level, how these things are distinguished.

And I just wanna give you one example of the language. For instance, on defining criteria and constraints, "meeting the standard" says the design presentation clearly articulates all criteria, and this is specific to a sound one, so she goes further and says, e.g., sensory translation and integration, and explicitly addresses all specified constraints (cost, safety, time) in the final design. Versus "approaching the standard," which says it clearly articulates the main criteria and addresses the majority of the constraints. So the difference between "the majority" and "all," or "main criteria" and "all criteria," is what got you from approaching to meeting.

Boz: And that is one of the other things about what makes a good rubric: parallel language. Where you can see it going from all, to most, to addressing it to some degree, to loosely addressing. That parallel language is another key aspect of good rubrics.

Sharona: Now, another thing that Annie does very well, and that is in our list of steps, is defining the top and bottom levels first. What's the best case, where you hit your good enough, or great, or whatever your top level is? And what does it look like at the bottom? Then place the other levels in between.

Boz: See, and that's something you and I talked a lot about when we were working with the engineering department at Cal State LA: not starting in the middle, 'cause that's where it seems like everyone wants to start. Shout out to the University of Nebraska; these suggestions are based on some of their work, and we've had a couple of guests from that area in the last couple of months. The University of Nebraska is doing a lot of great things, and they also back this up, saying you should develop the highest level and the lowest level and then work towards the middle. From our experience, when everybody wants to get away from points and start this, they want to start with the definition of the middle. They want to start with that C level.

Sharona: And I think in higher ed, that's partially because we're so focused on pass/no pass.

Boz: Yeah.

Sharona: And so that C becomes, what's the bare minimum I will accept to let a student pass? And I'm like, why are we targeting the bare minimum? Why don't we shoot for the moon? Let's define what we want that A to be and see if we can get everyone there. It's a much more positive place to live.

Boz: I also think there's something else to it. We saw this with some of the different courses we've tried to help redesign: when you start with the middle, by the time you get done, if you really compare it to what would've been acceptable in a traditional setting, it's much higher than what would've been required. So they're saying, yeah, this is gonna be my minimum for passing, but the criteria actually get much more difficult and demanding than they would've been in a traditional course.

Sharona: Because suddenly people are like, I want them to be able to do this, and this. And the reality is, in traditional grading, because of the way the points work, you don't actually have to be able to do anything, most of the time, to get a C.

Boz: Yeah. And then what happens when you start in the middle and inadvertently set a much higher bar than a traditional C would've been? By the time you get to the A, whether we're talking about

Sharona: graduate-level students in a freshman class.

Boz: Yeah. Whether we're talking about the rubric or the wrap-up, the level of the A becomes, okay, nobody is getting this. Like you said, a grad student in a freshman-level course might get it. Maybe.

Sharona: Exactly. Now, there is an important piece to all of this that I haven't yet articulated, although we've said it before when talking about giving good feedback: you want to avoid judgmental descriptor terms, adjectives like good or excellent. You want to use specific, observable, and measurable behaviors, like "the presentation clearly articulates the main criteria and addresses the majority." ("Clearly" is doing a lot of heavy lifting there.) None of this says that it does it well, or leans on any of those vague, judgmental adjectives: good, bad, excellent, whatever.

Boz: Yeah. And again, that's especially important if you're doing this outside of just your own class. That's where you get burned, either through the vagueness of your descriptions or the use of adjectives like good and excellent. We talked about this earlier when we were doing our sample grading of the OU student's work: your definition of good writing and my definition of good writing are very different. So if you're going to be doing this as a common assessment where multiple people are grading, then language like good and excellent, either don't use it, or you have to define what good and excellent mean in that descriptor as well.

Sharona: And I remember running into this when I was developing my first rubric for the history of math, because I got, unexpectedly in my opinion, poorly written first assignments. And I couldn't figure it out. These were seniors in college. Why was the writing so poor? They were engineering students; I wasn't expecting the next Shakespeare, but I thought they could write coherently. So I got on the phone with Joe Zecola and I was like, I don't understand. I'm not a writing teacher. I write very well by most academic standards. I don't write fiction well, 'cause it's not academic, but I can write a pretty good academic paper. I have no idea what to tell these students.

And we grappled with it for probably an hour. We're looking at it, we're reading it, and he finally says to me, how's the math in this? And I'm like, it's utter nonsense. They don't know what they're doing. And he says, that's why the writing doesn't make any sense. Don't give them feedback on the writing. Give them feedback on the math. And he was right. It magically fixed the writing, because the problem wasn't actually the writing; the problem was that they were confused about the math. It was a fascinating discovery. But if I'd had "good writing" as part of my criteria, it would've been a disaster. Instead, he helped me develop language for my rubric that said things like "avoids major grammatical mistakes that change the meaning of the sentence."

Boz: Yeah, instead of good or coherent, there are some actual descriptors of what that means. Exactly. So there was one other thing that, I will admit, I have never done. And reading it, and going back to some of my own experiences, it makes total and complete sense. I think this is something that you and I, when we start doing some of our trainings, need to point out and make a big deal of. And that is to test your rubric. Take a small random sample set, or go through and find samples that, on a first reading, look like a bad, an okay, a good, and a great. Test your rubric against them and see if there need to be any adjustments. If you've done this kind of assignment before, use prior students' work. But test it out.

I will admit I've never done this. And here's what ends up happening, even in a math course, and I can imagine in a writing or history course it would be even more so. Especially before I went completely away from points, I was giving halves: halfway between a beginning level and a novice level, because students would do things I wouldn't imagine them doing, and it's, okay, crap, this doesn't quite fit this definition on the rubric, but it definitely doesn't reach this level either. So test it out and find some of those things, 'cause I guarantee you, anyone who's taught for more than 10 minutes will be able to say the same thing: students will do things that you don't expect them to do.

Sharona: Which is why, to be honest, my favorite rubric is the complete, not yet.

Boz: Yeah.

Sharona: And the not yet being a revise. Literally, unless it really clearly meets what I'm looking for, it's this huge bucket of revise, and I can just go to town on feedback. So only having the two levels is absolutely gorgeous.

Now, there was one last thing I wanted to bring up, and we didn't prep this part, so hopefully you'll indulge me. It's linked on the document. There's an entire paper about different ways to introduce these rubrics to students. We don't have time to go into them, but we will link the article in the show notes. I do wanna say that spending time with your students on the rubric before the assignment, and having a robust discussion about its elements, is one of the most effective practices for introducing rubrics and getting students to use them properly. And it's not something I feel I've done a good job at.

Boz: Yeah. And it's one of those ways where you can help the students avoid going completely off the rails.

Sharona: Exactly.

Boz: Alright, we are running up on time, so wanna thank everyone for listening. Hope you've all had a great holiday season and a happy new year. This has been Boz and Sharona at the Grading podcast and we'll see you next week.

Sharona: Please share your thoughts and comments about this episode by commenting on this episode's page on our website. www.thegradingpod.com, or you can share with us publicly on Twitter, Facebook, or Instagram. If you would like to suggest a future topic for the show or would like to be considered as a potential guest for the show, please use the Contact us form on our website. The Grading podcast is created and produced by Robert Bosley and Sharona Krinsky. The full transcript of this episode is available on our website.

Boz: The views expressed here are those of the host and our guest. These views are not necessarily endorsed by the Cal State System or by the Los Angeles Unified School District.
