109 - Live from MathFest 2025: Trends, thoughts, and directions
Episode 109 • 12th August 2025 • The Grading Podcast • Sharona Krinsky and Robert Bosley
Duration: 00:51:48


Shownotes

Sharona sits down with MathFest 2025 attendees Drew Lewis, Jennifer Moorhouse, Lipika Deka, Debbie Narang, and Cory Wilson to discuss what we heard on the ground at this year's conference. From people presenting their experiences to a variety of data and forward-looking opportunities, alternative grading is making the switch to mainstream!

Links

Please note - any books linked here are likely Amazon Associates links. Clicking on them and purchasing through them helps support the show. Thanks for your support!

Resources

The Center for Grading Reform - seeking to advance education in the United States by supporting effective grading reform at all levels through conferences, educational workshops, professional development, research and scholarship, influencing public policy, and community building.

The Grading Conference - an annual, online conference exploring Alternative Grading in Higher Education & K-12.

Some great resources to educate yourself about Alternative Grading:

Recommended Books on Alternative Grading:

Follow us on Bluesky, Facebook and Instagram - @thegradingpod. To leave us a comment, please go to our website: www.thegradingpod.com and leave a comment on this episode's page.

If you would like to be considered to be a guest on this show, please reach out using the Contact Us form on our website, www.thegradingpod.com.

All content of this podcast and website are solely the opinions of the hosts and guests and do not necessarily represent the views of California State University Los Angeles or the Los Angeles Unified School District.

Music

Country Rock performed by Lite Saturation, licensed under an Attribution-NonCommercial-NoDerivatives 4.0 International License.

Transcripts

- Mathfest:

===

Boz: Hello and welcome to the Grading podcast. This week's episode was recorded live on location at Math Fest in Sacramento. So please sit back and enjoy as Sharona sits down with five of our colleagues from the conference as they talk about alternative grading and their experiences at Math Fest.

Welcome to the Grading Podcast, where we'll take a critical lens to the methods of assessing students' learning, from traditional grading to alternative methods of grading. We'll look at how grades impact our classrooms and our students' success. I'm Robert Bosley, a high school math teacher, instructional coach, intervention specialist, and instructional designer in the Los Angeles Unified School District and with Cal State LA.

Sharona: And I'm Sharona Krinsky, a math instructor at Cal State Los Angeles, faculty coach, and instructional designer. Whether you work in higher ed or K-12, whatever your discipline is, whether you are a teacher, a coach, or an administrator, this podcast is for you. Each week, you will get the practical, detailed information you need to be able to actually implement effective grading practices in your class and at your institution.

Sharona: Hello everyone and welcome back to the Grading podcast. This is Sharona, one of your two co-hosts, and I'm really excited today because this is our second annual live recording at Math Fest. It's not coming to you live on the pod, but we're recording this live and I'm in the room with five amazing people, so I'm going to have them introduce themselves real quick. If you could just give your name and your institution that would be great.

Drew Lewis: Hi, I'm Drew Lewis and I work with the Center for Grading Reform.

Cory Wilson: I'm Cory Wilson. I work at Oklahoma City Community College.

Jennifer Moorhouse: I'm Jennifer Moorhouse and I work at Hartnell College, a two-year school in Salinas, California.

Lipika Deka: Hi, I'm Lipika Deka and I work at California State University Monterey Bay.

Debbie Narang: Hi, my name is Debbie Narang and I'm at the University of Alaska Anchorage.

really amazing. So Math Fest:

And so later in:

're seven years out from that:

Drew Lewis: So one thing I noticed: I was having a conversation and I just made an offhand comment about alternative grading, and he said, oh yeah, you know, grades are bullshit. And so the way that this idea has kind of sunk in, like, yes, it's just a thing that people do. It's not some weird, crazy thing anymore. It's just something in the water that people do. So that's a pleasant change from the past, where it was just, oh, there's some weird special session over there.

Sharona: Well, and to clarify what you mean: alternative grading is just something people do. Not grading, because we all do that, but it's actually acceptable now to say grades are bullshit to someone whose position you don't know.

Cory Wilson: Hi, this is Cory. One thing that I noticed was, like what Drew was saying, in presentations that were explicitly about alt grading, there was a taken-as-shared understanding of what people were doing in their classrooms. And I think it's great to see that there is a baseline familiarity, at least at MathFest, with alternative grading methods.

Jennifer Moorhouse: Hi, this is Jennifer. One thing that I noticed was, you know, I didn't go to all of the alternative grading talks, because I've been doing it for a while and I wanted to explore some other things. So I went to modeling in differential equations, and in so many of those talks people were saying, oh, and I use standards based grading, or I use specifications grading. It's not just that we had this core group; it's expanding outward, and it's just being taken as almost a given, right? Like they didn't stop and explain what standards based grading was or what specifications grading was. It's just assumed that everybody knows what it is.

Lipika Deka: Hi, this is Lipika, and I agree with what others are sharing, but at the same time I'm also seeing that we had two very long sessions on alternate grading, and all five hours of those sessions were highly attended. The other thing I saw is that people this time are not just coming with what they're doing; they're also sharing data. I have a grant where we're doing standards-based grading for pre-calculus and calculus, and we're looking at a lot of data and trying to understand it. And here I saw other people having similar data, and suddenly, oh, now that makes sense, you know? So seeing that others are having similar data and similar challenges seems so helpful, something we can take back. The other thing I wanted to mention: there is large interest now, not just from instructors anymore, but also from funders, you know, foundations like the Gates Foundation, who are all looking at assessment and how they can help with it to improve student success in first year courses. I feel like that's a big movement, if we're able to get funders and big organizations interested in how they can support alternate grading.

Debbie Narang: This is Debbie. I don't have too much to add. I almost wish we could rename this whole topic, because I think alternative grading is not so much an alternative anymore; it is becoming a mainstream option. And I definitely saw a lot of sessions about alternative grading in more formal ways, and even some that I think are more informal but moving towards something that one would call, say, standards based grading or specifications grading.

Lipika Deka: I also wanted to share that for us at CSU Monterey Bay and at Hartnell College, we actually were inspired by the Grading Conference, which came out of the MathFest collaboration. We attended, I think, the very first one and the second one, and at that time our department was looking at our precalculus and calculus to see how we could make more improvement. We had already tried different things, and our focus somehow became: can we change something about the assessment model that we have been using? Even though we had active learning, complex instruction, reading apprenticeship, all of that going on, we were still facing challenges, and the only thing that we hadn't touched yet was the assessment model. Then the Grading Conference came in, and a bunch of us attended and were inspired by it, and we wrote two Learning Lab grants to help with converting our traditional precalculus and calculus assessment model. We were funded by the California Learning Lab, and we're now in the fourth year of calculus and the second year of precalculus, and we're seeing amazing outcomes coming out. There's still a long way to go.

Drew Lewis: That actually reminded me that there were several talks here at Math Fest that I heard about alternative grading, where they started with, oh, you know, I learned about this at the Grading Conference. Right. So it's, it's really nice to hear that it's having an impact and, and getting folks to try this, and it's sticking enough that they're doing it long enough to actually come back to MathFest and talk about their experiences.

hink it was the second one in:

implementing executive order:

And they said, yes, please. So my role on the panel, to some degree, was to make sure that we were starting with language, because out of five panelists, at least four of us have done standards based grading and ungrading in entry level math courses. So it was amazing that this panel got assembled not with grading as its focus, and yet four of the five people on a panel about assessment and placement, specifically how to do assessment and placement for equity, had that experience. I'm kind of mind boggled that it's only been seven years. So that's been a pretty amazing thing to hear.

So have we heard anything negative about alternative grading while we're here?

Jennifer Moorhouse: I think one of the common themes is that there are a lot of tweaks that need to be made in order to keep students on track, communication being really important. Many people talked about students who seemed to be doing well but had not passed any outcomes, or very few outcomes, and then coming to the end of the semester, after procrastinating all semester long, and trying to pass 15 outcomes or whatever they needed to pass the class. So that's been one of the themes for me: figuring out how to keep students on track. Until students really understand the system, until it's embedded within the high schools as well and students come to us knowing how the system works, I think that we all need to do a better job of training students in the model.

Cory Wilson: One of the other things that I heard was that people struggled with the administrative component of implementing any sort of alternative grading: working with spreadsheets, working with, or fighting with, an LMS. That deterred them from continuing with any sort of alternative grading system. So it might be nice if there were something different, but administrative issues were something I think I heard several times.

Lipika Deka: One thing I have been hearing, which I saw in our data and which concerned us: when you look at the data for passing rates, which is a big, big thing for buy-in and for convincing administrators and departments and institutions, we're seeing a lot more A's and B's, but at the same time we're also seeing more F's. We were seeing that in our calculus data in the first two years when we first started it, and then it sort of started getting better. And now we're seeing the same thing in precalculus: we have a lot more F's, or at least the F percentage didn't go down. So the question is, why is that happening? Why do we have more A's and B's but also more F's?

So one of the things I heard in many talks is that there is no partial credit in standards based grading. That means, for a learning outcome, you have to show 100 percent understanding of it, whatever that means for different instructors. So a lot of the time, the student who passed with a C or a D in a traditional system was probably getting that through a lot of partial credit, which is now going away. Those grades are basically converting into F's, bumping up your F's. I heard that from a couple of young instructors trying it out and not knowing where to go from here. They like it, but the numbers are not encouraging, particularly with respect to the failing percentage.

So what I wanted to mention is that we're seeing something different from precalculus to calculus. Calculus, in its fourth year going into its fifth year, is balancing that more: now you're seeing the F numbers actually going down and the other numbers going up. So what we're saying now is, oh, maybe we need to just be patient, give it time, and adapt a few things. One of the things some people are talking about, which we're also concerned about, is attendance: how is attendance impacted by standards-based grading? Most people say they didn't see a difference. But we're also seeing attendance be impacted by other external factors, which then impacts the grades the students are earning. So there is tweaking that needs to happen. But if we're patient and we work through it, maybe we could see changes in those F percentages.

Sharona: I have seen the same thing that you've seen that we're having an issue with the DFW rate. Like that doesn't seem to be budging as much as we would hope or expect with these grading reforms. But what our data is showing, because we do have, again, that ABF distribution, is that number one, the equity gaps have closed. So who is failing is no longer based on your race, your gender, your Pell-eligible status, or your first generation status. That's what we track. So we've at least done that. We've at least made it more about the individual student and not their historical background and income and things like that. And what I don't have the data for yet, but I'm hoping to get is I did get a dashboard from our institutional effectiveness research arm that is showing us how students do in a subsequent, like the second semester of pre-calc based on their passing grade in their first semester.

Because of course, if they failed, they don't go on to the second semester, and the high number of students who got a C in the first semester failing in the second semester is a big issue. What I'm hoping to try, if I can start to pilot standards based grading in our pre-calculus, which I have not yet been able to do, is this: if we see that increase to A's and B's, does the pass rate for those A's and B's maintain what it was before? Because the students who got B's are passing the second course at reasonable rates. Maybe not as high as I'd like, but reasonable. The students who are scraping by with a C in a traditionally graded class, I have fail rates that are 70% in the second semester. So is it that a C grade is set at the wrong bar? Or is it that it's this mix of students, which is what I suspect?

My hypothesis is that students who get a C in a traditionally graded class are one of two groups. One set of students, if they'd had the right opportunities to work on their knowledge, could get B's and A's. The other set of students is just gaming the system. And standards based grading is separating those two groups. So if that's true, and if the students who just needed a system that was more forgiving of mistakes, and actually based on learning from mistakes, do get the content knowledge they need for subsequent course success, I think that would be a very powerful thing. Now that I have the data dashboard, I just need the pilot course to do it in, because the statistics course that we run isn't in a course sequence, so there's not a place. The data that we do have indicates that they do better: they take an additional statistics course more frequently, and they do better than the general population. So I suspect my hypothesis is true, but I don't yet have the data in a sequential course. I would be thrilled if any of you are doing it in sequences. I think, Lipika, you were saying you guys might do it, so I'd be curious to hear your thoughts on that.

Lipika Deka: We have done it: precalculus is now using standards-based grading, calculus is using standards-based grading, Calc 2 is using standards-based grading. So our next step is to track students who come from standards-based grading in precalculus to calculus to Calculus 2, and then go on from standards-based grading. So we have a vision of tracking that and having some data in the next few years. But I also wanted to add to that, because in our precalculus class, both at Hartnell and CSU Monterey Bay, we did some student focus groups to understand how they are feeling about this new assessment model, which is in its first year for precalculus. We were very curious about what would happen at the focus groups, which we actually ran across all of our 11 sections. And it was done by our Teaching and Learning Institute. The instructors and the grant PIs were not involved in it, so we really wanted it to be an authentic experience of the student.

So what came across? We're a Hispanic-Serving Institution, so we have a very large percentage of first-generation URM students, and obviously about 30% of our students are of Hispanic background. What we're seeing is that students overall had a very positive experience with standards based grading. They talked repeatedly about how it helped reduce their anxiety or stress, and they used the word stress a lot, about having to get it right that one time in a traditional system, on an exam. And they loved that they could learn from their mistakes and go back and try again. It gave them flexibility, because a lot of our students are working full-time and have challenges with family time and many other things.

The other point I wanted to make here is about how standards based grading initially made students feel, because they have so many opportunities to go back. The same group of students we were trying to make a difference for also struggled with finding that extra time to retake; they don't have extra time to retake. So one of the things we did at CSU Monterey Bay to address that is that all of the reattempts were done in class time, again making it equitable so that it's not that some people have more opportunities and some have fewer.

So we're doing that in our precalculus class: everyone has the same number of opportunities during the class time where the assessment is happening. If they need more, in calculus we're doing some outside-of-class reassessment, but in precalculus we are just trying the in-class approach. Which I think goes back to the Grading Conference's original goal of grading for growth, equitability, and all of that.

And then going back to the equity gap that you talked about earlier, Sharona: in our data we're seeing gap reduction in all of the groups, like male versus female, first generation versus not first generation, URM versus non-URM. We also looked at prepared versus underprepared. In all of those group comparisons we saw the equity gap coming down, some a little bit slower, some a little bit faster, but everywhere there is improvement.

Jennifer Moorhouse: This isn't really answering the question that you asked, but what you were saying posed another question in my mind, which is, I'd really like to see some tracking on the F students. Did they reattempt the class immediately after failing it the first time? Because I think that what we're doing is we're fostering that growth mindset in students. We're telling them they can do it. They might just need that little bit of extra time. And those students, I think, often come out, anecdotally, I can say that my students that fail often tell me like, oh yeah, I'm taking it next semester, or they know where they went wrong and they're excited about the chance to reattempt.

Sharona: So, I have a couple of thoughts on that one. At Cal State LA, first of all, most of our students do not reattempt immediately. By the time they know that they have failed, especially in the fall, their spring semester schedule and work are set, because enrollment happens well before. So we tried an experiment last year, maybe two years ago, where we told all of the fall students who failed in the standards based class that we would carry over what they had mastered into the spring, but only into the spring, not if they took a gap. And we did not get a lot of uptake; we maybe had 10 or 12 students. And again, I think one of the reasons is the students who are failing for us are way low. They're not failing by a little bit. Our bar is seven learning outcomes passed out of 15 for a C minus, and we have less than probably 10% of our class there. So if we have a 40% fail rate, let's say, which is a made-up number, but if we did, 10% of them would have between two and six outcomes.

Thirty percent of them would have one or none, and of that 30%, 20 or 25% have none. So the bulk of our fails are not the students who in the past were scraping by. They're the same ones that would've failed in traditional grading. I think it's more that, for us, the ones that were struggling, we're mostly catching those during the semester. And sure, if someone had five learning outcomes and they wanna come back and keep going, I'll take those five from fall to spring. The problem with spring to fall is you have the whole summer in there, and that bothers me. I'd rather they redo it.

So I think that that's a very interesting question, and I feel like it's letting me look at my administration and say, you need to stop blaming the math class. Blame it all you want in traditional classes, because honestly, I have many, many opinions about precalculus. But our Quantitative Reasoning Statistics course has been very intentionally designed to really serve students. And so I think in that class, when they're failing, it's for much more complicated reasons than just that they're bored and disconnected in math class.

So I think that that's important work that this is now exposing and that we're able to really look at. I've had people ask me, hey, what would you have to set your bar at to get an 85% pass rate in one of these classes? In my non-supported class, I've hit 85%. In the same class on the supported side, which is where we have our less prepared students, I can't get there without setting my bar literally at zero. So that's a much bigger problem, and it speaks to different resources.

I did wanna bring up, changing the topic a little bit, one other quote-unquote negative thing that I heard. There were several talks that emphasized how determined students in their institutional context were to argue for partial credit. This is happening at your expensive institutions, where students feel that they've paid for the grade, and it's happening in other places too. Most of the time, number one, it's happening with more than two levels. I mean, students sometimes hate and sometimes love the two-level scale, you got it or you didn't; that can be perceived as very harsh. I tend to use that in more of a revision environment as opposed to a more standards based one. But when you use a three-level scale where two of those levels are not good enough, students wanna argue, "but I did so much." And I have noticed this coming up a lot of times when people are using numerical proficiency scales, so they'll do a 2, 1, 0, or they'll do a 4, 3, 2, 1, something like that. But what have you seen with those multiple levels of not yet?

Drew Lewis: I'll just say one thing first, and then I'll hand it to Cory. The thing about the numbers: there's research that shows that once you put those numbers on a piece of student work, they stop looking at the feedback. And so I think that's a big part of what's going on with these rubrics. When you put those numbers on, or sorry, not rubrics, proficiency scales, right? Once you put those numbers on there, that's what they're focusing on. They're not focusing on that feedback or what those levels actually are.

Cory Wilson: I did something last academic year where I had a five-point proficiency scale: exceeds expectations, meets expectations, review, not yet, and blank, something like that. I didn't actually see students arguing between those levels. I think that having the descriptors instead of numbers helped them understand, or at least act like they understood, the distinction that I was trying to communicate to them about their work. So maybe it's just a matter of having descriptors instead of numbers; my experience with it was positive. My colleagues and I are trying to coordinate Calculus 1 at my institution, and we're moving to a three-level proficiency scale, so I'll be interested to see if students have the same, maybe not complaints, but the same issues that you were mentioning, Sharona.

Sharona: So I have a follow up on that, the three level scale. Can you describe what the three levels are and how they're gonna be described to students?

Cory Wilson: Yes, if you give me a second I can actually pull up the descriptions. Hey, Bosley, how's it going. Yeah. So we have meets expectations, revisions necessary, and needs improvement. For meets expectations, the description is a little lengthy. It says: the submitted work demonstrates sufficient command of the material, possibly including minor computational or transcription errors. Conceptual errors, if any, are minor in nature, and deviation from given instructions is minimal. The student's work may suggest that minor revision of the material is needed, but fundamentally communicates that they understand the concept or process. Then revisions necessary: submitted work demonstrates some level of command of the material and may include computational or transcription errors, conceptual errors, or substantial deviations from given instructions. The student's work indicates that revision of the material is needed in order to communicate their grasp of the concept or process. And then needs improvement is similar, and includes blank or off-topic responses.

Jennifer Moorhouse: So I started out at one of those expensive schools, and I can tell you that those students who are arguing for partial credit would still be in your office arguing for more partial credit if you did give partial credit. It's just the nature of grading: students are being ranked, and they don't like being ranked. Nobody likes being ranked. I totally agree with the statement that grading is bullshit. And I think that those students at the very expensive private institutions have a sense of privilege. They feel that they're smart and should just be rewarded for being smart, not necessarily for doing the work.

Lipika Deka: Different, but on the same topic. One of the things I heard from people is that alternate grading works well when you have a small class size and you don't have coordinated classes. But at Monterey Bay we actually tried standards-based grading across 15 highly coordinated sections. We really had to keep the model simple, because when you have seven or eight different instructors teaching that many sections and everything is highly coordinated, you can't really have a complicated assessment model. So we used met / not met yet. Of course, we had to break down what we mean by each level, and it's pretty similar to what most people are using. But keeping it simple made it easier to coordinate and easier to run the sections at the same time in the same way, because every student was doing the same type of grading model in precalculus and calculus. For some reason, even though students said having partial credit would have been beneficial, they didn't really complain about the met / not met yet system. So a lot of it is about perception and what goes around.

The other thing I wanted to add: at this conference I heard from several groups of instructors coming from large public institutions with really big class sizes and much bigger numbers of sections. They wanna do this, and they wanna learn how we're running 10 or 15 coordinated sections. We're probably doing a 36-student cap, but they also wanna do it at a bigger cap size. So I guess if we can keep it simple in some way, maybe then the bigger classes could also open their minds to alternate grading.

Debbie Narang: Hi, this is Debbie. I also have a system where I have an S for success, and it might mean that they perhaps have an arithmetic mistake or they miscopied in a way that didn't change the difficulty of the problem. I also have an M for minor error, which is an algebraic error that changed what the problem looked like, but I wanna give them another chance. I used to give them 48 hours, and now I give them 10 minutes at the beginning of class. I hand it back and say, hey, you need to fix this. They're not very common, so it works okay. And occasionally I'll get a student who writes so badly that I can tell they know how to do the problem correctly, but I want them to know it's not okay that they gave me gibberish. It's not that they missed one equal sign somewhere; it's that the whole thing is just poorly written. I'm like, okay, you have this 10 minutes. Can you write this for me in a way that makes sense, that other people can understand?

Sharona: So many good things to respond to, oh my gosh. Okay, I'm gonna try to remember all of them, and I'm probably gonna go in backwards order, starting with doing this in a large lecture situation. I'm just gonna plug some of our other episodes. We had Dr. Eden Tanner on in episode 21. Shockingly, this one's gonna probably be episode 109, but 21 repeated as episode 77. She was just one of the keynotes at the Grading Conference this year, and I still can't quite figure out how she does it, but she uses multiple choice quizzes and scantrons, with well-written multiple choice questions, to get open-access chemistry students to pass the American Chemical Society final. So I'm just like, I don't get it, but I'm here to say it's doable in coordination, it's doable in large lectures. So that was step one going backwards.

But then I wanted to go back and comment for a moment, Cory, on your descriptions. I absolutely love those descriptions, because good proficiency scale writing, which is often called good rubric writing, is explaining in the rubric itself, or in some reference document, what it is that makes these things happen. And it's the thing that I struggled with the most in my history of math class when I first designed it. That class ended up being a combination of specifications grading and standards based grading. The standards based grading part just used the complete, revise, blank sort of three-level scale. And with the revise, I never had to justify it; I was telling them what to revise, like, you know, hey, you didn't give enough on race and gender, or whatever it was.

But the specs part was hard for me, because at first I couldn't articulate what was good enough and what wasn't for a complete versus a revise. Here's what we ended up coming up with that made a lot of sense to me. We had four writing criteria, four mathematics criteria, and then two criteria for each of the parts of the project that were not directly related to the learning outcomes. I wanna be clear, I wrote these with the help of an AP literature teacher. Thank you, Joe Zeccola; we have to get you back on really soon. The four writing criteria were: clarity, meaning sentence structures are easy to follow and writing flows from sentence to sentence; technical, meaning the work demonstrates control of language with few lapses in diction, syntax, grammar, and APA formatting; errors, meaning errors that are present do not interfere with meaning or understanding; and argument support, meaning all major claims are substantiated. Those are the exact same four criteria, with one tweak, for the math. The math is also clarity, technical, errors, and argument support, but for clarity, instead of sentences, steps follow from one to the next mathematically, with explanation as needed. Concepts are separated as needed and notation is consistent; that was the technical side. Errors was the same: errors that are present do not interfere with meaning or understanding.

So again, it's very easy for me to say, do your errors interfere with meaning? And then I can have a conversation with the student. Okay, explain to me what you meant here. Oh, you understand it. That's not what you said, so go fix it. But what you said was great. And then the pieces for the other parts were interesting. So I really love how you guys have described that.

Cory Wilson: I can't take full credit for that, as much as I would like to. I'm not sure they'd be okay with me sharing their names, but if they listen, they know who they are. Two of my colleagues at the University of Oklahoma gave me the template for those descriptors. And then my colleagues at Oklahoma City Community College helped refine it for our fall 2025 implementation of calculus. So I just wanted to shout them out and give them credit for those descriptions.

ence as my own teacher was in:

Okay, so we're coming up on the end; we've got maybe about 10 minutes left of this at most. What do you hope to hear if you come to MathFest next year? Let's manifest. We're all gonna be at MathFest next year. What would you like to see talked about? What would you be amazed if you heard, formally or informally? I'm manifesting that I get to give my grading-as-a-misuse-of-math talk. I can't imagine that MathFest is gonna let me talk for an hour and a half to a big group, but I'm manifesting. What do you think? What do you wanna see next year?

Lipika Deka: Hi, this is Lipika. One of the things I would love to come back to and see is a learning management system, or a grade tracking tool, or a gradebook, that is easy to use. I'm just waiting for that. So much of the talk this year was about that, so I'm hoping that someone develops it.

Sharona: So I just wanna shout out that version 2.0 of the Learning Mastery Gradebook for Canvas is supposed to be launching this fall, in theory. At least that's what I heard from Instructure. So thank you to the Canvas people in our community who work for Instructure. Yes, I love that manifestation. That's what we want. We wanna be reporting on our use of version 2.0 of the Canvas Mastery Gradebook. Okay. Someone else?

Drew Lewis: I see lots of parallels between how our community is growing and how the IBL, inquiry-based learning, community grew over the years. Right now, if you come to MathFest, every year there's a SIGMAA, a special interest group, on IBL that organizes sessions every single year. So I think that might be a good long-term goal.

Sharona: Okay. But a short-term goal is maybe we need a SIGMAA. You're saying you want a SIGMAA? Or you want a SIGMAA that does annual sessions? Okay. I know. Okay. Well, I was just clarifying. I think we could get it for next year. Why not? Let's make it happen.

Debbie Narang: Hi, this is Debbie. I would like to see more on ungrading and using alternative methods for grading upper-division math classes. I definitely did see some here at the conference this year, but I would like to see a little more.

Jennifer Moorhouse: I guess I'll just second what Lipika said: the technology really needs to catch up. Honestly, my what-if? I'd like to see grades go away completely. I mean, if a student has mastered calculus, why do we need to rank and sort them between the As, the Bs, and the Cs? If they understand, they understand, and let them move on.

Cory Wilson: I think one thing I would be interested in seeing, and I don't know how it would work out, would be the use of artificial intelligence tools, I guess with communication, or even how instructors are using artificial intelligence to help them create materials and create assessments. It's not something I'm super familiar with. I'm a Luddite as far as AI goes right now, but I'd be interested in seeing stuff like that.

Lipika Deka: This is Lipika again, adding to that AI piece. One thing I heard a lot is that the core of standards-based grading is that you don't meet it the first time or second time, but you get good feedback on it. Then you learn from that feedback, learning from your mistakes, and you go back and do it better the next time. And we heard from instructors across the board that students do not understand the feedback. Either instructors are not able to give clear feedback, or students are not able to understand the feedback they're receiving. Students are not understanding what their mistake was, and hence are not able to fix it or learn from it to get to the next level. So I'm manifesting that maybe there could be AI tools and other digital tools that could help instructors give clear feedback and help students understand feedback. Because learning from your mistakes is what we're trying to teach in standards-based grading and alternative grading. So I would love to see some sort of technology coming to help with that.

Sharona: I completely agree. I've been really grappling with the role of AI in feedback. My concern is that the feedback is the human relationship; that is the part that is human between the instructor and the student. And yet we know that getting trained to give good feedback is one of the barriers to implementation, because many of us as mathematicians are not trained to give good feedback. So I do think that maybe not outsourcing your feedback to an AI, but having the AI help you get better at feedback as an instructor, and then having an AI that you yourself have trained, because that has started to happen, help the student in a scaffolded, guided way to interpret and work with the feedback. Maybe those are the tools that could be useful but don't interfere with the teacher-student relationship. So I think that'd be amazing.

I have another thought; someone said something about assessment. One thing that Bosley and I came across this year is a school we were working with that had fully adopted standards-based grading. It was a virtual middle school, and they've gone standards-based across the entire school. So they made the leap. They designed their courses. They're two or three years into it. And then AI came along. And they said, we're really struggling to know how to assess; we're a completely virtual school. They do have some synchronous class time, but they cannot enforce in-person exams of any kind.

So Boz and I did a deep literature dive, and we developed some criteria for authentic assessments based on the literature. I wonder if our community is mature enough now, and there's enough of us who have been doing it long enough and are still facing this threat of AI, that maybe there'll be an opportunity at MathFest next year to really talk about what authentic assessment looks like in an age of AI with alt grading. Personally, I'm not gonna bother too much trying to do authentic assessment in traditional grading, because I don't think it measures what you're trying to measure. I mean, you can do it, and if you're still in a traditional grading world, authentic assessment will help you. I think it's the next necessary step in alt grading right now, because so many of us worked really hard on assessments that are no longer valid in an age of AI.

So I think we've got a lot of things that we're manifesting for next year's MathFest. The Grading Conference has already been scheduled for next year: June 16 through 18, Tuesday through Thursday. We're still tweaking the schedule to see if people would be more willing to come at that time. We have some other pitches, too. I was just at TUEMLS, Transforming Undergraduate Education in the Molecular Life Sciences, and I've also gone to the American Society for Engineering Education's big conference. If any of you go to an education conference that is not math related, or if anyone knows about any other ed conferences, try to get alt grading on the schedule. I'll go to pretty much any discipline at this point and talk to them, but I think it would be amazing if we all went out to our individual groups and conferences that we go to and started asking for sessions on alt grading.

I know I'm looking at an AAC&U conference, but if there are any conferences that you, the listeners, think that we should be trying to be at, let us know. Send it in on the contact form, send it in by email. Let's get the word out.

So Debbie, Lipika, Jennifer, Cory, Drew, thank you guys so much for joining me. This was a really last-minute invitation, and it's just so appreciated. This has been The Grading Podcast, live from MathFest. We'll see you all next week.

Please share your thoughts and comments about this episode by commenting on this episode's page on our website, www.thegradingpod.com. Or you can share with us publicly on Twitter, Facebook, or Instagram. If you would like to suggest a future topic for the show or would like to be considered as a potential guest, please use the Contact Us form on our website. The Grading Podcast is created and produced by Robert Bosley and Sharona Krinsky. The full transcript of this episode is available on our website.

Boz: The views expressed here are those of the hosts and our guests. These views are not necessarily endorsed by the Cal State System or by the Los Angeles Unified School District.
