5 – Getting Started Part 3: Developing your Grading Architecture
Episode 5 • 15th August 2023 • The Grading Podcast • Sharona Krinsky and Robert Bosley
Duration: 00:45:48


Shownotes

In this episode, Sharona and Bosley describe the four decisions that make up the "Grading Architecture" of an alternatively graded course.

  • How will you assess your learning targets?
  • What type of proficiency scale(s) will you use?
  • How will students show proficiency for a specific learning target? (i.e., how much evidence, and of what kind, do they need to provide?)
  • How will everything roll up into a final grade?

After introducing the four decisions, they explore some examples of common options used for each of the decisions. This includes:

  • What types of evidence of learning will you accept?
  • Exploring three common types of proficiency scales - 2-level, 3-level, and 4-level scales
  • Describing different methods for determining proficiency on a specific target (the Guskey method, N times, and decaying average)
  • Evaluating options for rolling up grades, including the simple count method, the bucket method, and the "good to great" method with multiple levels of proficiency.

Resources

The Grading Conference - an annual, online conference exploring Alternative Grading in Higher Education and K-12.

Some great resources to educate yourself about Alternative Grading:

Recommended Books on Alternative Grading:

The Grading Podcast publishes every week on Tuesday at 4 AM Pacific time, so be sure to subscribe and get notified of each new episode. You can follow us on Twitter, Facebook and Instagram - @thegradingpod. To leave us a comment, please go to our website: www.thegradingpod.com and leave a comment on this episode's page.

If you would like to be considered to be a guest on this show, please reach out using the Contact Us form on our website, www.thegradingpod.com.

Music

Country Rock performed by Lite Saturation

Country Rock by Lite Saturation is licensed under an Attribution-NonCommercial-NoDerivatives 4.0 International License.

Transcripts

Sharona: A factor that really needs to be thought about is the grade tracking of all this stuff.

Bosley: That, that's actually a really big factor and one that can be overlooked when you're new to this and first trying to put this together. You know, we've talked about some of the big mistakes. One of them is just making things too complex. Well, even if you've got a simplified version, you have got to have a way to track it.

Welcome to the Grading Podcast, where we'll take a critical lens to the methods of assessing students' learning, from traditional grading to alternative methods of grading. We'll look at how grades impact our classrooms and our student success.

I'm Robert Bosley, a high school math teacher, instructional coach, intervention specialist and instructional designer in the Los Angeles Unified School District and with Cal State LA.

Sharona: And I'm Sharona Krinsky, a math instructor at Cal State Los Angeles, faculty coach and instructional designer. Whether you work in higher ed or K-12, whatever your discipline is, whether you are a teacher, a coach, or an administrator, this podcast is for you.

Each week, you will get the practical, detailed information you need to be able to actually implement effective grading practices in your class and at your institution.

Bosley: Hello and welcome back to the podcast. On today's episode, we're going to be looking at what we call the grading architecture, which, combined with the four pillars, is kind of the nuts and bolts of how you set up your course for alternative grading. So, Sharona, what do we mean when we say grading architecture, what is that?

Sharona: Well, thanks for the question, Boz. I really like this question because, as far as I know, I'm the one who started using the term grading architecture. So, welcome back to the pod, everybody. What we mean by grading architecture is the decisions that you make, the structure of the grade, for your course.

Bosley: Yes. Since, since we're not doing points in percentages and averages, how then do you come up with the end term grade?

Sharona: Exactly. And we have four decisions that we think people need to make to build a grading architecture. But before we get there, we have a fundamental question to ask about the course and about that end of term grade, which is: what is the purpose of the end of term grade? We've talked on the pod about how we're assuming that most of us are in a multi-level grading environment. Yeah. That you have to give some form of an end of term grade, whether it's pass/fail, A, B, C, D, no credit, whatever it is. It's a multi-level grade. What's the purpose, in your opinion, of that end of term grade?

Bosley: Well, and, and that's actually a question that we start a lot of our trainings off with, you know, what is, what is the meaning and what is the purpose? I think, kind of the standard answer we get, it's supposed to communicate some sort of level of proficiency or mastery of the course content, which is a really generic answer, but in this context, we need to go actually a little bit deeper than that, don't we?

Sharona: I think so. And I don't think there's one answer. I think this is another one of those situations where it's not "what's the purpose of an end of term grade?" but "what's the purpose of an end of term grade for this specific class, in this specific institutional context, in the placement of this class in a greater curriculum?"

Bosley: Like so many other things when we're talking about alternative grading and all the different, you know what, what I like to say the different flavors of it or the different degrees of freedom is, you know, really looking at your course. What are the end goals?

Is your course in a sequence of courses? Is it a pre-req? Is it, you know, a kind of a capstone of a sequence? Is it an elective? Like, what is the purpose of that course? Where does it fit into the overall makeup of a student's, not just current education, but future education as well?

Sharona: Exactly. So, you know, I teach, primarily, three different classes. I teach a general education statistics class, I teach linear algebra, and I teach history of math, which is a junior/senior level elective. And I would say the purpose of the end of term grade in those three classes is somewhat different. Although at the end of the day, for my purposes, in my institutional context, an end of term grade communicates what proportion of the material in the class has been achieved to a proficiency level. So at the moment, although this may change, an A in my class means proficiency at 90% of the material, a B is 80% of the material, give or take, and a C is 70% of the material.

So that's kind of the context that I'm in, that I come from. I know that for some of the engineering folks we work with, it's a little bit different. A C means mastery of a core base content set: these are the things they absolutely need to be successful in the next class. A B means that not only has the student mastered that core competency material, but there's some extra material that they've gotten, and an A means they've gone even further than that.

So,

Bosley: in that context, and I would actually argue with you about your linear algebra, but you know, an A actually means that the student is set up to likely be successful in the next course in the sequence, you know, regardless of, of which classes we're talking about, what's next in the sequence.

Whereas, the Statistics, the gen ed statistics class and the history of math, because it's not in a sequence, the A does mean something a little bit different, even though they're both based on the level of material that the student has shown proficiency in.

Sharona: Yeah, I would agree with that. And when I think specifically about the statistics course, yes, I want my students to have gotten mastery of 90% of the material, but what I am assessing in the statistics course is based on the fact that it's a gen ed course that is not a prerequisite. So I have more freedom to assess a little bit more of the thinking skills and a little bit less of the strict procedural skills, and the course is designed for that.


Sharona: Well, and I think that that first question of what's the purpose of the grade actually comes even before you design your learning outcomes. So in a previous episode we talked about designing learning outcomes, but you, or at least I, choose my learning outcomes in part based on the goal of the course.

Bosley: Alright, well, let's back up a little bit, because we did just kind of jump into the topic of this episode. But I want to make it clear: when we talk about grading architecture, we said there are four decisions that make up your grading architecture that you need to go through. However, this is only done after your learning targets, or at least a first draft of them, are done. This is something you don't jump into before you do your learning targets.

Sharona: Exactly. And if we want to extend the analogy, I would say that what is the purpose of your grade is sort of the overall, you know, what kind of a house are you building? Is it an apartment building? Is it a house? And then the learning targets are your foundation. Yeah. That's where you go. And now we have to make some additional decisions.

Bosley: All right, so what is that? What is that first decision? And these are actually kind of linear, even though you'll go back and forth in it. We're going to present these in what I think is kind of a, a necessary linear progression of what questions you ask.

Sharona: Exactly. So you would try to answer number one first before you go to number two, before you go to number three, before you go to number four. Exactly. So...

Bosley: What is that first question?

Sharona: The first one is, since you already have your list of learning targets, how are you going to assess them? How are you going to obtain your evidence of learning for that particular learning target? So we have an episode coming up shortly where we're really going to dive a lot more deeply into the different kinds of learning outcomes. But essentially, at the end of the day, we're looking for evidence of learning. So are you going to get it from quizzes? Are you going to get it from portfolios, from projects, from tests?

How are you going to, is it going to come from conversations with students? How are you going to get that evidence of learning?

Bosley: Yeah. And, and again, because our courses can look very different, the, this can look very different and, you know, you can have multiple ways of doing these assessments. But yeah, whether it's going to be projects, whether it's going to be traditional testing, quizzes, is it going to be portfolios?

You need to decide how you're going to assess the learning targets that you've come up with, because that's going to, and then turn around and help determine some of the next questions.

Sharona: And in other future episodes, we'll talk a lot about aligning those assessments to your targets because you really want good evidence for what the student's proficiency is in a specific learning target or learning outcome.

Bosley: Yeah. This episode is meant as an overview of, you know, the grading architecture, the nuts and bolts of how you do this. We will have future episodes that really, you know, a single episode on each one of these four questions and, and going deeper into why you need to do this, how you can do it, and hopefully even having some more guests on looking at different ways that people have done it.

Sharona: So the second one that, the second decision, that we need to look at is what type of a scale will you use in looking at your evidence of learning?

Bosley: Yeah. Well, once you've determined how you are going to collect it, and a student has, you know, done one of these, how do you assess that assignment? How do you assess that project? In a traditional way, this is where the points start to come in. You know, you make it out of 10 points and you take off points. Again, that has all kinds of issues that we brought up in our episode on what's wrong with traditional grading.

So if we're not going to use points, what are we going to do? How do we do it?

Sharona: So, typically, in a lot of the courses that we teach, there are three primary scales. There's what we would call a two-level scale, meaning a student is proficient or not yet proficient. They've met the expectations or they've not yet met the expectations. That's actually the one that a lot of people use when they're more in a specs-graded world. You might have a whole list of these learning targets, and you have to get each one of the targets on a single assessment to a "yes, you've met it" spot.

The second one of these scales is a three-level scale, and that's the primary one I use. You've got the top of the scale, which is you've met expectations, you're proficient, or you're complete, or you're done, or you're an emoji, because I use a lot of emojis. Then there's you've not quite met it, and you either need to revise or retake it. And then the third level is you're not there yet, or an insufficient level of evidence has been shown.

We just don't have any real information: either the student hasn't provided enough evidence, or they're really far away from the target and not yet there. So: satisfactory, getting there, and not yet, whatever that means. And then the third scale is a four-level rubric or scale. That one is typically used when you have two levels of proficiency and two levels of not yet there. So you have a meets expectations or meets proficiency level and an exceeds level, like a good and a great. Yeah. And then you have a not quite there and a not yet there. And that's a four-level scale.

Bosley: Yeah. Now, and we're calling these proficiency scales. Some people might think of them as rubrics and they are similar. There are some subtle differences, which when we get into this, you know, episode that's dedicated just to this question, I'm sure we'll explore kind of the subtle differences between proficiency scales and rubrics.

But, for those of you that are listening, if you've never heard proficiency scale and you're thinking to yourself, wow, this sounds a lot like a rubric. Yeah, they're, they are very similar and some will even use these interchangeably.

Sharona: And the reason that we're moving more to this language is that a lot of the LMS systems assign a rubric to an assignment, whereas we're assigning a proficiency scale to a target, a learning target, and not to an entire assignment. So a rubric might be an assembly of multiple proficiency scales because it covers multiple learning targets. That's why we're trying to distinguish these two terms.

The other thing I wanted to remind people of, Boz, is that one of the big problems with traditional grading is that a hundred-point scale means a hundred levels of discrimination of quality of work, and research has shown that human beings can't discriminate that finely.

Bosley: Yeah, we talked quite a bit about that in our What's Wrong with Traditional Grading episode: the human brain, even for professionals, just doesn't separate that many different degrees of precision.

Sharona: Exactly. So what I would encourage people is don't go more than five levels. And that fifth one is really a four level one with an added zero. Right? Zero meaning no evidence provided.

Bosley: Yeah. And, and we're not saying that you can't, but if you're going to go beyond that, you need to have a good reason why. Even the AP Lit and Lang exams that used to have a nine-level rubric on their writing prompts have gone down to the five-level, which is a zero through four: no evidence, and then not yet, almost, proficient, and exceeds. So, not that you can't, but if you're going to go more than five, which is really the zero through four, you need real rationale as to why you need those extra levels.
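For listeners who like to see the structure spelled out, the three scale shapes discussed above can be modeled in a few lines of Python. This is just an illustrative sketch; the label names and the rule that "meets expectations or above counts as proficient" are our own choices, not terminology from the episode.

```python
# Hypothetical representations of the three common proficiency scales,
# ordered from lowest to highest. Labels are illustrative shorthand.
TWO_LEVEL = ["Not Yet Proficient", "Proficient"]
THREE_LEVEL = ["Not Yet", "Revise/Retake", "Meets Expectations"]
FOUR_LEVEL = ["Not Yet There", "Not Quite There",
              "Meets Expectations", "Exceeds Expectations"]

def is_proficient(scale, label):
    """A mark counts as proficient if it sits at the 'meets' level or above.
    The floor index for each scale size is our assumption."""
    proficient_floor = {2: 1, 3: 2, 4: 2}[len(scale)]
    return scale.index(label) >= proficient_floor

print(is_proficient(FOUR_LEVEL, "Exceeds Expectations"))  # True
print(is_proficient(THREE_LEVEL, "Revise/Retake"))        # False
```

Note that nothing here needs numbers at all, which matches Sharona's preference for language (or emojis) over points.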

Sharona: So, Boz, the third question that I want to kick over to you is this. We've talked about how evidence of learning will be found, how you will assess learning targets, and what type of proficiency scale to use. Now, how will the students actually show, ultimately, that they've achieved that learning target?

Bosley: And what's interesting about this particular part of the architecture is that it's the one that's completely missing in traditional grading. So I have found, especially with new practitioners, this is the hardest one to hammer out, because with traditional grading, you put points or percentages on, you know, your assignments, and then your end grade is made up of some sort of collection of all of those points or percentages, whether it's a straight average or a weighted average. But you don't have this middle step.

So this one is unique to grading systems that are not traditional, and that is because we're looking at individual learning targets. How do you know when a student has achieved proficiency or mastery of that learning target?

Sharona: And so the first one is your favorite, so I'm gonna let you discuss it.

Bosley: Yeah. The, the first one is taking the whole body of evidence of however many assessments or assignments you've had on that learning target and looking at the overall pattern. You know, putting, definitely putting more weight towards the latter assessments or assignments than the beginning ones. But looking at the, you know, the whole breadth of evidence that you have. And Dr. Thomas Guskey and Ken O'Connor and several others, you know, they've written quite a bit about this. In fact I like to refer to this as the Guskey method. But yeah, looking at and using your professional judgment over any kind of analytical tool or any kind of algorithm and just looking at that pattern.

Sharona: It, it's almost like creating a narrative of the student that results in a final grade. So in that first episode, one of those first episodes we talked about Ken O'Connor's parachute packing problem. Mm-hmm. And there were the three students, one of whom started off strong and just petered out throughout the semester. Another student started very weak and just got stronger and stronger and stronger. And another student who was scattered all over the board, and you can come up with a final end grade for each of those students by looking at those patterns and telling that narrative story.

Bosley: Yeah, and, and also we gave that example to showcase the problem with using an average, because all three of those students mathematically had the same average, even though they have very different stories. And if you look at the, you know, overall pattern of those three students, it's very easy to see, you know, which student deserves the higher grade, at least on that learning target.

Sharona: Exactly. Now the second method, which is the one that we primarily use, is called N times, which means that students must show proficiency on a certain number of assessments.

Bosley: Yeah. So if you have, you know, let's take our gen ed statistics class as an example, our students have five attempts to show mastery of any one of the learning targets. Well, we define it as you have to get it twice, don't care which two times you do it, you get it twice, you've shown us that you're proficient enough in that learning target that we can count that learning target towards your grade.

So whether it's the first and second attempt, or the fourth and fifth, or the second and fourth, it doesn't matter. It's just a predetermined amount. That is key: it's a predetermined number of attempts. Out of the attempts they have, how many times do they have to get it on that single learning target?

Sharona: And this is definitely an interesting balance. I know a lot of instructors who've tried one time, and they felt they got a lot of false positives: students who just accidentally got it. We tend to use two times for that very reason. I know some people who've tried to do three times, but then when you do the math in a 15-week semester: if they have to show it three times and you have, say, 25 learning outcomes, you're going to have probably five or six assessments per outcome. So 25 learning outcomes times five or six assessments is 150 items per student that you have to make available. Now you've got a class of 25 or 75 students. It very quickly gets unwieldy. So most of the time we see one or two times, or sometimes you have to show it on different types of assessment, like maybe once on an in-class assessment and once on an untimed assessment, or once in a portfolio. We've seen that as well.
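The N-times rule lends itself to a very small sketch. This is illustrative only; the default n=2 mirrors the two-times rule Sharona and Bosley describe for their statistics course.

```python
def met_n_times(attempts, n=2):
    """N-times rule: a learning target counts toward the grade once the
    student has shown proficiency on at least n attempts, regardless of
    which attempts those are. `attempts` is one boolean per assessment."""
    return sum(attempts) >= n

# Five attempts on one target; the student was proficient on the
# second and fourth attempts. Which two doesn't matter.
print(met_n_times([False, True, False, True, False]))   # True
print(met_n_times([False, True, False, False, False]))  # False
```

The "different types of evidence" variant would simply track attempts per assessment type (in-class, untimed, portfolio) and require at least one success in each.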

Bosley: But this, this also goes back to what I said earlier of why you can't make these decisions until you've got your learning targets. That doesn't mean, you know, as soon as you start trying to define your grading architecture, that your learning targets are set in stone and you can't change them. Of course, you go back and revise, add, subtract, but for your grading architecture to make any sense, you need to have an idea of how many learning targets you have. You know, you need to know what, kind of, what those are. You know, if I have, if I only have five or six learning targets, then maybe requiring it three or four times isn't unreasonable.

But like you said, if I've got 25 of them, unless every single assessment I do is, you know, measuring three, four, five, or six of these learning targets at a time. Yeah. I'd just bury myself in grading.

Sharona: And that's why we say you have to even do these, or at least start these grading architecture decisions in order, because in order to know you're going to do it twice, you have got to know what they're doing it on. I mean, if they're doing big, massive portfolio projects, there's only time for so many of those in a semester, so maybe you can't require it twice.

Bosley: Exactly.

Sharona: Or maybe you're giving daily quizzes, and therefore maybe three or four times is reasonable because you've got a class that meets five days a week in a K-12 setting.

Bosley: Mm-hmm.

Sharona: You know, it might be much more reasonable. So these decisions are somewhat dependent. Again, you're gonna iterate your way through these decisions.

Bosley: Yeah. You'll, you'll go back and forth and, and make revisions. And...

Sharona: It occurs to me that this also comes down a little bit to philosophy. Because one question I often get asked is, if a student shows mastery twice by week five of the semester, and I'm not requiring them to do it again on the final, how am I confident that they still know how to do it?

And the answer I give is, I'm not. I'm not confident, but I'm confident that once they've done it once, they can get it again pretty quickly. I mean, the common joke in our field is: when does somebody actually learn calculus? And the answer is, the first time they teach it.

Bosley: Yeah.

Sharona: So if we're not holding even ourselves accountable for really learning this material in a way that's going to stick forever and ever and ever, I personally am okay if they got it once and they need it again, they'll go back and get it again. And I'm okay even if that's in week five and they don't have it by week 15.

Bosley: And, and, but again, that goes back to what you started this episode off with was, you know, what is the purpose of your course and what is the purpose of the grade of your course?

You know, if I'm looking at Calculus, knowing it's a foundational course to pretty much any other math course, whether you're math, science, or engineering. I want to make sure that those skills are a little bit more hammered in, because I know they're going to see it again and again. You know, in physics they're going to see it, in, you know, strengths and materials or whatever.

Whereas our stats course, our gen ed stats course, it's not a pre-req for anything.

Sharona: Right.

Bosley: Like, so the purpose of this course and the meaning of those grades are different. So that is going to change. Like, I would not do the same grading architecture for a Calc 1 class that I do in this, you know, gen ed statistics class.

Sharona: I completely agree with you. I completely agree. And there's a third common one. Having made the mistake of using this one, I don't like it at all, and this is our podcast, so I get to say that. I do include it in part, though, because one of the things we want to talk about in a minute is the way that the tools we have access to are going to influence some of this.

But the third one's called a decaying average. This is where you take your proficiency scales, and I don't even like putting numbers on scales at all, I like language and emojis, but you can put numbers on them and then you can do math on them. The decaying average basically says the stuff that's assessed earlier is weighted less and the stuff assessed later is weighted more. So as a student continues to provide evidence, the later evidence is more valuable.

Bosley: Yeah. Which sounds similar to the Guskey method, but with one big difference. In a decaying average, let's say I've been doing well on these different assessments throughout the semester, and then the last time you assess it, I woke up, my dog bit me, I don't have any milk for my Cinnamon Toast Crunch, I missed the train. By the time I get to your class to take this test, I am in no state of mind to be able to do this, and I bomb it. Well, in the Guskey method, as the professional you can look at that pattern and recognize that was just an outlier. But with the decaying average, if that day happens at the first assessment, no problem. If that day happens on the last assessment, it will tank the grade.
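To make Bosley's outlier scenario concrete, here is one common way a decaying average is computed. The 0.65 recency weight is an assumption on our part (a default some gradebook tools use), not a value from the episode.

```python
def decaying_average(scores, weight=0.65):
    """Blend scores so later evidence counts more: each new score
    contributes `weight` of the running value, and the prior running
    average keeps the rest. The 0.65 default is an assumed, commonly
    seen setting, not one prescribed here."""
    avg = scores[0]
    for s in scores[1:]:
        avg = avg * (1 - weight) + s * weight
    return avg

# On a 4-level scale: three strong attempts, then one bad day at the end.
# The single final outlier drags a student who scored 4, 4, 4 down near 2.
print(round(decaying_average([4, 4, 4, 1]), 2))  # 2.05
```

Flip the same scores around ([1, 4, 4, 4]) and the result lands near 3.9, which is exactly the asymmetry Bosley is warning about: the formula cannot tell a bad final day from genuine regression, while a human reading the whole pattern can.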

Sharona: And as I said, this is one of the ones that's built into a lot of the learning management systems, what we call LMSs. There are some other options in there too, like taking the highest two grades, but simple is good. The simpler, the better. And we have entire episodes coming up on simplification of these systems. So those are the three most common ones. Now, again, I would say that the Guskey method is probably a little bit more of an umbrella form. And by the way, with none of these methods are we saying that only the instructor gets to decide.

Bosley: That's an, that's an interesting point. That's right.

Sharona: Because, you know, when you think about some of the more ungrading versions of this stuff, the Guskey method, where the student is looking at the overall body of evidence, is what I would argue is fundamental to an ungraded method where they're still assigning a final grade.

So you can do a lot of these wrap-ups, these rollups of how to determine proficiency in a learning target, and it can be the student looking at their own work. You could even do it with N times, because you can hand out an answer key and have a student self-assess their answer against the key and decide whether or not they think it's a proficient answer. So this can definitely go multiple ways. This isn't just the instructor as sort of god on high.

Bosley: And what's even more interesting, I think, is that not only can the students do it for themselves, but there are a lot of realistic and practical settings where the students can do it for each other: that peer review, that peer editing and assessment.

Sharona: And it was occurring to me that even the things I don't like, like decaying average is not my favorite. However, what if you are a licensing prep course, say in surveying, we are working with an engineering professor who does the surveying course. Wouldn't you... perhaps it's an option that the final exam is the most important, most weighted thing because it's the thing that happens right before they go to take their licensing exam. So it, it's essentially extra incentivized in the grade, in part because you want them to feel the pressure for making sure they're ready, because they're going straight into the licensing exam. It's a thought.

Bosley: Yeah. And it goes back to that original thing: what's the purpose of the course, and what's the meaning of the grade? Yeah. If you're doing some sort of licensing prep, or your class is typically the last class they take before some sort of licensing test, then there might be an argument, a rationale or a reason, to want to do a decaying average.

So even though it's not one that you and I personally like to do, there's definitely cases where it's probably the most appropriate one to do.

Sharona: Yeah, and I'm thinking about, like, a community college context where failing the course is a lot less expensive than failing the licensing exam. So, you know, looking at it...

Bosley: Those are expensive.

Sharona: Yeah. So really looking at the full multidimensionality of the whole thing. Okay. So we've done the first three decisions. Mm-hmm. How are we gonna assess an individual learning target? What type of proficiency scale are you going to use? How will your students demonstrate, like, what evidence do you need for a specific learning target? Now we come to the fourth decision. The final decision is how is this all going to roll up into a final grade?

Bosley: Yeah. And in traditional grading, you know, this decision is usually pretty mindless, but it's there: you define your A as 90% to 100 and your B as 80 to 89.9 or whatever. So it's the same kind of idea, but because we don't have points, we don't have percentages, we don't have weighted categories, how do we then take this collection of learning targets, which ones the students did and didn't get, and determine that end grade, whether it's pass/fail or A, B, C, F? You know, how do we do that?

Sharona: So we're going to give three main different ones.

Bosley: We, we like the number three here.

Sharona: We do like the number three, although I will say I would add to this one the ungrading approach, which is just deciding in consultation with the student and against a set of final rules. Yeah, you can do that one. The one that I use the most often is a simple count. So if I have 28 learning outcomes in my linear algebra class, 26 or more of them at a proficiency level is an A, 25 is an A-, 24 is a B+. I just go straight down the line with mine. Now, why 26 out of 28? Well, honestly, I'm kind of defaulting back to a little bit of a traditional grading system and saying, well, that's 90% of the content, give or take.

So these are some of the things that maybe could use a little more examining, but I do like the simple count. It ignores a lot of the nuance, but it simplifies the communication, and it gets a lot of buy-in from students.

Bosley: Yeah. So it doesn't matter which of the learning targets they, they do and don't, it's just the number of learning targets they get.

So, like you said, I do a course that's got 15 in it. Last semester I did one that had seven in it. With the 15, the A is 13. No, that's not quite 90%, but that's because I looked at the learning targets that I had in there and what I thought was sufficient for defining those letter grades. Like, what did an A mean for me in that course? What did I think that A should message? I took those answers, looked at my learning targets, and went, yeah, they can miss two of these 15 and still have an A.

Sharona: Exactly. And even in my linear algebra where it's just, I don't care, it's any 26 of them. The reality is in order to get 26 of them, you have to know most of the content. And even, even getting, I think it was 19 maybe to pass. In order to get any of them, you needed the foundational ones. Yeah. So it was fine.
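To make the simple count concrete, here's a short Python sketch of the roll-up Sharona describes. The top cutoffs (26 is an A, 25 an A-, 24 a B+ out of 28) and the passing line around 19 come from the conversation; the cutoffs in between are illustrative assumptions, not her actual syllabus.

```python
def simple_count_grade(proficient_count):
    """Map a count of proficient learning targets (out of 28) to a letter grade.

    The A/A-/B+ lines and the C (passing) line follow the episode;
    the B, B-, and C+ cutoffs are assumed for illustration.
    """
    cutoffs = [
        (26, "A"), (25, "A-"), (24, "B+"),
        (22, "B"), (21, "B-"), (20, "C+"), (19, "C"),
    ]
    for minimum, grade in cutoffs:
        if proficient_count >= minimum:
            return grade
    return "F"  # below the passing threshold

print(simple_count_grade(27))  # → A
print(simple_count_grade(19))  # → C (the passing level mentioned)
```

The appeal is exactly what Sharona notes: the whole policy fits in one small table that students can read for themselves.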

Bosley: So, all right. So that's one method. That's just the simple count. What, what's another way that we could do this?

Sharona: So another way that this is often done, and we've seen it done a lot, and it makes a lot of sense in a lot of contexts, is the bucket method. So you're somehow going to break your learning outcomes up into groups.

And I've flirted with this at various times, but you could have a distinction between, say, content outcomes and discipline practice outcomes. Those could be buckets. You could bucket things by, well, these learning outcomes are absolutely critical to success in the next course, so they form a bucket, and that's a mandatory bucket.

And then you can have an expansion bucket of "and this stuff is really great to have, and if you get it, you're going to get a higher grade". That's another kind of thing you can do.

Bosley: So, looking at an example of that one: we've worked with a group of professionals who do it this way. They have a set of learning targets, and they've actually gone and talked to the professors in the later courses in the sequence, because these are courses at the beginning of a sequence, and asked them, what is it that our students need to know going into your class to likely be successful?

Okay, those are the skills. So in my class that I'm teaching, my C is defined as: you have to have all these foundational ones. My B is: you have to have all these foundational ones and a couple of these extra ones. And then an A is all of these foundational ones and, you know, maybe four or five of these extras.

So defining your buckets as, you know, foundational and some sort of expansion or additional. So that's another way of doing it. A third way...

Sharona: Well, before you go to that, I was going to say, you've also worked with some people who do more survey courses, right? Don't they use a different type of bucket?

Bosley: Yeah, that's what I was gonna bring up, and that's breaking it up into content buckets. Like you said, I see this a lot, especially with the survey science classes, in high school or the freshman-level college ones. I actually did this the last time I taught algebra two, where you break it up by content. So my algebra two buckets: I had one for graphing, I had one for solving equations, and so on. And then my C was defined as: you had to have so many from this bucket, and this bucket, and this bucket. My B: you had to have so many from each. You just define each of those letter grades by how many of the learning targets they need in each bucket. Again, I see this a lot with the sciences, especially the survey high school ones or freshman college, where they have such a huge breadth of content that they have to cover. They have to cover evolution and genetics and biodomes and all this other stuff.

So they break those content standards or learning targets into those buckets, and then the C is: okay, I've got three of them in the genetics bucket, so they have to have at least one of those, and I've got five in the biodomes bucket, so they have to have at least three of those. Just breaking it up by content that way.
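The bucket method is really just a per-bucket minimum check: each grade has its own set of minimums, and a student earns the best grade whose minimums they all meet. A minimal sketch in Python, with requirement numbers loosely modeled on the biology example above (the exact A and B minimums are assumptions for illustration):

```python
def bucket_grade(results, requirements):
    """Return the highest letter grade whose per-bucket minimums are all met.

    `results` maps bucket name -> number of targets shown proficient.
    `requirements` is ordered best grade first; each entry pairs a grade
    with the minimum proficient count required in each bucket.
    """
    for grade, minimums in requirements:
        if all(results.get(bucket, 0) >= need for bucket, need in minimums.items()):
            return grade
    return "F"

# Illustrative thresholds, loosely based on the episode's biology example
# (3 targets in genetics, 5 in biodomes); A and B lines are assumed.
requirements = [
    ("A", {"genetics": 3, "biodomes": 5}),
    ("B", {"genetics": 2, "biodomes": 4}),
    ("C", {"genetics": 1, "biodomes": 3}),
]
print(bucket_grade({"genetics": 2, "biodomes": 5}, requirements))  # → B
```

Because the check is per bucket, a student can't "buy back" a missing genetics target with extra biodomes targets, which is the whole point of the method.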

Sharona: And I think if I ever got to teach calculus two again, I would do that, because in my mind, calculus two, at least the way it's taught at my university, has three content buckets. It has advanced integration techniques, as I call it; it has sequences and series; and it has motion in space. And I would want my students to have at least some proportion of all of those. Especially later in the semester, as their other classes get harder, students can start to slack, and I don't want them to always skip motion in space, because that's going to hurt them when they go into their physics class or some of their other classes.

Bosley: Yeah. And again, that calc two class is definitely a course in a sequence. They're going to need some of that information in later courses to be successful. So you're right, just having students skip out on or blow off one of those later buckets... like, what?

What did you say? Motion in something.

Sharona: Motion in space.

Bosley: Motion in space could be very much a detriment to them in a physics course later down the road.

Sharona: Exactly. So then the third way that we typically talk about wrapping up the letter grade is by actually using multiple levels of proficiency on the proficiency scale.

Bosley: Yeah. So for this one, you really do have to have at least two different levels of proficiency in your proficiency scale or rubric, which you set in your second decision. This method defines the A as getting so many "goods" and so many "greats", or so many "goods" with a maximum of so many "non-proficient" scores.

So maybe the C is: if I've got 10 of these things, you have to get at least seven 3's, or seven "goods". And then for my B, maybe you need eight, but out of those eight, two of them have to be at the "great" level, and none of them can be at the one level. So you're defining your letter grades based on those different levels of proficiency on the assessments.
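This good-to-great roll-up can also be written as a few threshold checks. In the sketch below, the B and C lines follow Bosley's example for 10 learning targets (C is seven "goods" or better; B is eight, with at least two "great" and no "not yet"); the A line is an assumed extension, and the four-level labels ("great", "good", "almost", "not yet") are one possible scale, not the only one.

```python
from collections import Counter

def multilevel_grade(scores):
    """Grade from a list of rubric levels, e.g. ["great", "good", "not yet", ...]."""
    counts = Counter(scores)
    great = counts["great"]
    good_or_better = great + counts["good"]
    not_yet = counts["not yet"]
    if good_or_better >= 9 and great >= 4 and not_yet == 0:  # assumed A line
        return "A"
    if good_or_better >= 8 and great >= 2 and not_yet == 0:  # B from the example
        return "B"
    if good_or_better >= 7:                                  # C from the example
        return "C"
    return "F"

scores = ["great", "great"] + ["good"] * 6 + ["almost"] * 2
print(multilevel_grade(scores))  # → B
```

Note how much more state this takes than a simple count: three tallies per student instead of one, which is exactly the tracking burden discussed next.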

Sharona: And one of the things that we see as people make these decisions, and a factor that really needs to be thought about, is the grade tracking of all this stuff.

Bosley: Yeah. That's actually a really big factor, and one that can be overlooked when you're new to this and first trying to put it together. You know, we've talked about some of the big mistakes, and one of them is just making things too complex. But even if you've got a simplified version, you have got to have a way to track it.

Sharona: And one thing that I worry about with some of these rubric scores and other kinds of things is whether or not the technological tools that you have access to support them.

So even something as simple as: you need seven at a "good" level, and then for a B you need eight at a "good" level, of which two have to be "great" and none can be "not yet", or whatever it is. Trying to track just that for 75 students becomes very hard.

Bosley: Yeah. And if we're talking about the, you know, K-12 world, we're not talking about 75 students. We're talking, you know, 150 to 200 students. And we're talking about likely having to do that two to four times a semester because we have to do interim grades, you know? Exactly. And my district, you know, we do it every five weeks in a 20 week semester. So I'm turning in grades eight times in the course of a year, four times in the course of a semester.

Sharona: So we've already talked about the fact that we're going to have to have a hacking the grade book episode. Yep. But this is an area where I'm going to strongly encourage you, the simpler you can keep it, the first time, the better. The simple count is the most straightforward way to go. It's very easy to communicate to students.

Or maybe a count within buckets. Most of the systems that I'm aware of that support this type of grading at all support the simple count, or the simple count within buckets. Canvas, I know, does both of those; I think Schoology probably does too.

Bosley: Actually, Schoology does the count, but it's harder to do the buckets. I mean, I've heard of someone saying that they found a way to kind of hack the Schoology LMS to be able to do it. I haven't actually seen it yet, though.

Sharona: And a lot of the LMS systems don't support it at all. So there are a ton of people out there who've developed technical tools to support grade tracking.

If you're thinking of doing this with your course, I highly recommend you get in touch with people within the community who might have tackled this exact problem, because I think all of us at some point have tried Excel, Word mail merge, Google Sheets, Jupyter notebooks, Python scripts. You name it, we've tried it.
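For anyone going the script route, the tracking itself can genuinely be a few lines. Here's a minimal sketch, assuming a flat export of (student, target, level) rows and the four-level scale from earlier; the "best attempt wins" rule reflects the reassessment philosophy, not any particular LMS's behavior, and the field names are hypothetical.

```python
from collections import defaultdict

def tally_proficiency(rows, proficient_levels=("good", "great")):
    """Count proficient targets per student from (student, target, level) rows.

    Keeps only each student's best recorded level per target, so a
    later reassessment replaces an earlier attempt.
    """
    order = {"not yet": 0, "almost": 1, "good": 2, "great": 3}  # assumed scale
    best = {}
    for student, target, level in rows:
        key = (student, target)
        if key not in best or order[level] > order[best[key]]:
            best[key] = level
    counts = defaultdict(int)
    for (student, _target), level in best.items():
        if level in proficient_levels:
            counts[student] += 1
    return dict(counts)

rows = [
    ("ana", "LT1", "not yet"), ("ana", "LT1", "good"),  # reassessment wins
    ("ana", "LT2", "great"), ("ben", "LT1", "almost"),
]
print(tally_proficiency(rows))  # → {'ana': 2}
```

The per-student counts this produces feed directly into whichever roll-up method you chose: a simple count, per-bucket tallies, or separate good/great counts.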

Bosley: It's great if you can use your LMS, or it's great if you have a lot of these technical skills. You know, I've been using Excel for 20-plus years, and Google Sheets; I wouldn't call myself a master at it, but I'm pretty dad gum proficient. But you don't have to be. You can find someone who is, or, like you said, we've got a ton of these in our community.

away, not last year, but the:

So these tools are out there. If you're proficient with some of them and you want to create your own, great. If you're not, don't run away. There's still...

Sharona: Steal them. Steal, yeah. One of our favorite phrases around the alternative grading community is "stealing the hubcaps".

Yep. So we steal everybody's hubcaps.

Bosley: Well, my master teacher, when I did my student teaching I love her to death, Miss Whistler. Yeah. She used to tell me that the best teachers are great thieves. Like anything you see, that's good, steal it.

Sharona: Exactly. Exactly. So we've had the four pillars: clearly defined learning outcomes, assessment of mastery, eventual mastery, and helpful feedback.

We've now had the four decisions of grading architecture: how will you assess your learning targets, what type of proficiency scale will you use, how will a student show evidence of learning, and how will you wrap it all up into a final grade? These eight things are going to interplay throughout all of these alternative grading conversations.

Yeah. The four pillars are kind of the philosophy behind it, and the grading architecture is kind of the nuts and bolts, but they absolutely go hand in hand. We've already seen how you'll go back and forth with that first pillar, which, again, there's a reason why that's the first pillar: clearly defined learning targets.

This is the key thing: you cannot do anything else before you get those. And again, one of the three biggest mistakes that I see new practitioners making is trying to shortcut that step. But yeah, these two things are going to go back and forth with each other, and we're going to be going into much deeper conversations about not just the grading architecture that we've talked about today, but also those four pillars, and see how those interact with each other.

Bosley: So I'm really excited to see what we've got coming up. I think this is going to be a fun journey. Please share your thoughts and comments about this episode by commenting on this episode's page on our website, www.thegradingpod.com. Or you can share with us publicly on Twitter, Facebook, or Instagram.

If you would like to suggest a future topic for the show, or would like to be considered as a potential guest, please use the Contact Us form on our website. The Grading Podcast is created and produced by Robert Bosley and Sharona Krinsky. The full transcript of this episode is available on our website.
