Strengthening Assessment in Accreditation: Building Evidence, Benchmarks, and Continuous Improvement
Episode 327th March 2026 • Teaching and Leading with Dr. Amy and Dr. Joi • Dr. Amy Vujaklija and Dr. Joi Patterson

Shownotes

Amy Vujaklija and Joi Patterson discuss the complexities of accreditation, emphasizing the importance of assessment and evidence. They highlight the need for clear rubrics, measurable language, and effective feedback loops to ensure continuous improvement. The conversation covers the challenges of data collection and utilization, stressing the importance of actionable data rather than mere accumulation. They also discuss the significance of benchmarks, the role of stakeholders, and the necessity of organized systems to manage data and avoid burnout. The segment concludes with a focus on the impact of accreditation on the broader educational ecosystem.

Transcripts

SUMMARY KEYWORDS

Accreditation, benchmarks, assessment systems, data collection, data utilization, feedback loop, continuous improvement, student learning outcomes, rubrics, measurable language, Safe Zone Training, climate survey, stakeholders, professional development, communication protocols.

SPEAKERS

Joi Patterson, Amy Vujaklija

Amy Vujaklija:

If you don't know where data are coming from or what you're using to provide evidence for a specific benchmark, then you can get lost in those weeds very quickly. Teaching and leading are rewarding, but complex,

Joi Patterson:

and whether you're in a classroom or on a campus, new challenges are always emerging.

Amy Vujaklija:

I'm Amy Vujaklija.

Joi Patterson:

I'm Joi Patterson, and this is Teaching and Leading, where we explore teaching, leadership, equity, and the systems that shape education,

Amy Vujaklija:

and how educators can grow, explore, and have a meaningful impact as teachers and leaders. So let's get into it.

Joi Patterson:

Hi, Dr. Amy.

Amy Vujaklija:

Hello, Dr. Joi. How are you today?

Joi Patterson:

I'm good. I am ready for our next segment, all things accreditation, right? This is probably my favorite segment, which is on benchmarks and assessment systems when we're talking about accreditation. And so for me, I think assessment and evidence are at the heart of accreditation, but it also can be the most challenging component. It certainly does take the longest, right? So I am so looking forward to this conversation, so that our listeners can learn more about what I think is the heart and soul of accreditation, and that's assessment.

Amy Vujaklija:

I agree it is that heart and soul, and I didn't like that part to begin with. You know, I'm the big, big thinker, the big picture person. I like the partnerships and what things look like from the 10,000 foot view, and who works with whom, and how partnerships can develop.

Joi Patterson:

And assessments, it's all in the weeds.

Amy Vujaklija:

It is in the minutiae, and sometimes that can be really, really overwhelming. The part of assessment that can feel like the hardest part of accreditation is making sure the assessments meet the accrediting agency's criteria. There are specific assessments you have to use to monitor students' progression through the program, the student learning outcomes. But what rubrics are you using to measure those outcomes? Agencies may provide rubrics for you, but even so, how are you implementing those rubrics? And if you have to design your own rubrics, the key, I've found, is really thinking about the standard's language, whatever that language is, and baking it into the rubric, and then using very measurable language, even look-fors, or language such as "the candidate can meet these criteria as evidenced by" specific look-fors. But this is all dependent. And I think the other piece that doesn't get talked about and doesn't get recognized enough is the feedback loop and the development of the assessments, to make sure that you're actually measuring what you intend to measure.
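
As an illustration of the measurable, standards-aligned language described here, the following is a minimal sketch of a single rubric criterion written as a structured record. The standard reference, descriptors, and look-fors are invented placeholders, not any accrediting agency's actual rubric.

```python
# A minimal sketch of one rubric criterion written with measurable,
# standards-aligned language. The standard ID, descriptors, and
# look-fors are invented placeholders for illustration only.
criterion = {
    "standard": "Standard 3 (placeholder)",  # quote the standard's own language here
    "outcome": "Candidate designs assessments aligned to learning objectives",
    "levels": {
        "exceeds": "Candidate aligns all assessment items to stated objectives "
                   "and justifies each alignment in writing.",
        "meets":   "Candidate aligns at least 80% of assessment items to stated "
                   "objectives, as evidenced by an alignment table.",
        "developing": "Candidate aligns fewer than 80% of items or omits the alignment table.",
    },
    "look_fors": [
        "Completed objective-to-item alignment table",
        "Written rationale citing the standard's language",
    ],
}

# Each descriptor names observable evidence ("as evidenced by..."),
# so two scorers reading the same artifact should reach the same rating.
```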

Joi Patterson:

Yeah, absolutely. I think for many people it feels like the hardest part because this is the part where you have to prove impact. If there are seven different standards or components to your accreditation, this is really the one where you have to prove the impact. This is all of your quantitative data, and the numbers don't lie. Everything else, I think, is subjective; this part is not. And so sometimes that is the most challenging. So then you have a responsibility, and you're really good at this, of making sure that you have an assessment system in the first place. There are ways to keep it from being the hardest part, which it typically is: by having a really clear assessment system, with timelines and benchmarks and things like that, and opportunities for decision making and sharing information so that you can make improvements. But yeah, this can be the hardest part if you're not going into it prepared, because this is the one where you actually have to prove something, because it's measurable. So speaking of data, Amy, and this gets into quality, what do you think about the difference between collecting data and using data? Two different things: the collecting of data and the using of data.

Amy Vujaklija:

So I want to say a couple of things there, and let me answer this with a question. Do you remember when we talked to Dr Damon Williams a few years ago about not shelving data? It's almost like we want to collect everything we can possibly collect, but where does it go? Where does it live?

Joi Patterson:

Yeah, I don't know what the fun is in just collecting, but it seems like that's where the fun is: in collecting, to say, I collected it. You know, I did the survey, I did this thing, I have this stuff.

Amy Vujaklija:

The thing is, just because we can measure something doesn't mean it's meaningful. We can measure how tall people are and see what the average height of the people in our program is, right?

Joi Patterson:

Right, and it's not as if we're now going to go rebuild the doors and the hallways people walk through because we found out the doorways are too short and our people are too tall. That's how you use the data, right? You know, I can recall, and I really like this part, I've been part of many assessment teams, but when we do something even like our climate survey... and I do want to go back to assessment just for a minute, to take it broadly. When we're talking about assessments, we're talking about anything that collects information. So that could be your quizzes, that can be your tests, that can be your surveys, that can be your reports. It can be a student forum; some of it can be anecdotal information. So there are lots of ways you can build assessments, and you're collecting all of these different things as your assessments, and then it becomes your data, right? The information from the assessment becomes your data. But when we did the climate survey, one of the things we saw is that the LGBTQ community reported the highest rate of feeling harassed. Okay, so good information, right? We collected that information, good information to know. Now, what are you going to do about it? As a result, we started having Safe Zone Training; we started doing all of these different things. And that's using the data, right? When we actually make improvements or changes, anything where we shift because the data is speaking to us, it's literally speaking to us and saying, I need to go this way or that way, and what are you going to do about it? So it really should be informing you and shaping all of your next steps. See how exciting this is, Amy?
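
To make the collecting-versus-using distinction concrete, here is a toy sketch of acting on survey results: flag any subgroup whose rate exceeds a threshold and attach a follow-up action. The rates, threshold, and actions below are hypothetical and are not the actual climate survey data.

```python
# A toy illustration of "using" rather than "shelving" survey data:
# flag subgroups whose rate of reporting harassment exceeds a threshold
# and attach a follow-up action. All numbers and actions are hypothetical.
harassment_rates = {          # share of respondents reporting harassment
    "LGBTQ+ respondents": 0.31,
    "All respondents": 0.12,
}
THRESHOLD = 0.20

follow_up = {
    "LGBTQ+ respondents": "Schedule Safe Zone Training; revisit reporting channels",
}

for group, rate in harassment_rates.items():
    if rate > THRESHOLD:
        action = follow_up.get(group, "Convene a working group to plan a response")
        print(f"{group}: {rate:.0%} -> {action}")
```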

Amy Vujaklija:

It goes back to things we were talking about in a previous episode, about how long the accreditation cycle is. It's seven to ten years for that very reason. Assessments, like you said, can look a lot of different ways. The ones I was talking about with rubrics are maybe key assessments, major tests or projects that measure a student learning outcome for a particular program, but it might be a survey sent to graduates, students, faculty, or partners. And if we are just waiting and collecting data, then we can't incorporate or change anything that will make our programs better, our university better. Part of accreditation that I appreciate so much is the continuous improvement aspect. We've been very fortunate to have great accrediting agency team members who have provided incredible feedback and prompted us, really urged us, to think about things in terms of: what are you doing to improve? And you can't do that in a short period of time.

Joi Patterson:

Nope, you cannot. And so you have to plan this way out. So let's talk about how you get your assessment system to be accreditation ready. For our listeners, I'd like you to walk us through some of what you do to make it clear and consistent and to make sure your team is aware. Does your team even know that you have an assessment system, and do they know their responsibilities within that system?

Amy Vujaklija:

Those are some really good questions, and it makes me think about how, in my office of accreditation and assessment, we have really implemented systems to manage data, because if you can't manage and organize it, again, it sits on that shelf and gets dusty, and you don't know what to do with everything you're collecting. So, for instance, we get reports for the content test. Our candidates have to take a licensure test before they get a license, and we are provided with reports of those scores. Well, now that we have a system that can organize and disaggregate these data, into candidates who are maybe first generation, or by race and ethnicity, or by age if we wanted to, because we have all of these data in our database, we're forward thinking. Instead of collecting this information, storing it up, and providing it to our accrediting agency when it's time, we are being proactive in disseminating it to our program coordinators so that they can review these test scores and then make adjustments or provide interventions for students who are not doing so well. That's just one small example.
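
A rough sketch, in Python with pandas, of the kind of disaggregation described here: breaking overall licensure scores down by group so program coordinators can spot where interventions are needed. The column names, scores, and cut score are assumptions for illustration only.

```python
# A minimal sketch of disaggregating licensure test scores so program
# coordinators see pass rates by group instead of one overall number.
# Column names, scores, and the cut score are illustrative assumptions.
import pandas as pd

scores = pd.DataFrame({
    "candidate_id": [101, 102, 103, 104, 105],
    "first_generation": [True, False, True, False, True],
    "race_ethnicity": ["Black", "White", "Latino", "White", "Black"],
    "content_test_score": [228, 245, 219, 251, 240],
})
CUT_SCORE = 220  # hypothetical passing score

scores["passed"] = scores["content_test_score"] >= CUT_SCORE

# Pass rate and average score by group -- the kind of table a program
# coordinator could act on between accreditation visits.
by_first_gen = scores.groupby("first_generation")["passed"].mean()
by_race = scores.groupby("race_ethnicity")[["content_test_score", "passed"]].mean()
print(by_first_gen)
print(by_race)
```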

Joi Patterson:

Yeah, and I think another good use is even budgetary: what needs to be funded, where does your money need to go? I think that's a really good argument to take to your dean or your chair, whether it's academic support for students and we need additional funding there, or whether you need another assessment person, or things like that. Your data can really tell the story of where your needs are. And so sometimes it changes not only the budget but even the structure of your department, based on what those needs are. I think the best way of setting up your assessment system, which I love to do, is by benchmarks. Creating an assessment system can feel very, very overwhelming; starting with benchmarks is the easiest way, and then you go from there. You might have several benchmarks. You have your benchmark for accepting the student, and then you have this whole list: what are all the things required to accept the student, and what assessments are you going to use to collect that evidence and the data from it? So list all of those things; that's your acceptance. If they're moving to clinicals, that might be another benchmark: what are all the things you're looking for, what are all the requirements, what are all the assessment tools you need to determine how they move there? And then maybe one for student teaching and maybe one for completion. But you decide what these benchmarks are. So break it up in chunks. I would say that is the very first place to start: breaking whatever program you are evaluating into chunks of benchmarks, and then, under each benchmark, deciding what's required to show evidence that your candidates, your students, are meeting those expectations or requirements. I think that is the very foundation of creating your assessment system. Once you have your benchmarks, how you want to assess them, and what's required, then you can start to build the rest of your assessment system.
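
One way to picture the benchmark-first structure outlined here is as a simple mapping from each benchmark to its requirements and its evidence sources. This is a minimal sketch; the benchmark names, requirements, and assessments are illustrative, not a prescribed list.

```python
# A minimal sketch of an assessment system organized by benchmarks:
# each benchmark lists what is required and which assessments supply
# the evidence. All names below are illustrative placeholders.
assessment_system = {
    "Acceptance": {
        "requirements": ["Minimum GPA", "Disposition self-assessment", "Advisor interview"],
        "assessments": ["Application review rubric", "Disposition survey"],
    },
    "Entry to clinicals": {
        "requirements": ["Methods coursework complete", "Background check on file"],
        "assessments": ["Key course assessments", "Clinical readiness checklist"],
    },
    "Student teaching": {
        "requirements": ["Content test passed", "Application approvals complete"],
        "assessments": ["Cooperating teacher midterm and final evaluations"],
    },
    "Completion": {
        "requirements": ["Capstone portfolio submitted"],
        "assessments": ["Portfolio rubric", "Exit survey"],
    },
}

# Print each benchmark with the evidence it draws on.
for benchmark, plan in assessment_system.items():
    print(benchmark, "->", ", ".join(plan["assessments"]))
```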

Amy Vujaklija:

You know, I'm glad you mentioned benchmarks and the list of data you would use to show that your students are meeting those benchmarks. It does tell a story. Something that can be really overwhelming, and this is where your brain is so helpful, the way you think about charts and systems and organizational structures: if you don't know where data are coming from, or what you're using to provide evidence for a specific benchmark, then you can get lost in those weeds very quickly. For instance, we have a student teaching application, and in that electronic document there are approvals that take place: the advisor is signing off that the student teacher will have met certain requirements, the program coordinator is recommending them, and I am doing my checks to make sure they are ready for licensure. So something as simple as an electronic document that is routed can be that benchmark from one step to another, to an internship, for instance, or a student teaching practicum. And that is an easy implementation: a signed document routed from the advisor to the coordinator.
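
Here is a minimal sketch of the routed-approval idea, assuming a simple ordered chain of sign-offs (advisor, then program coordinator, then the accreditation office). The class and workflow are hypothetical, not the actual application system.

```python
# A minimal sketch of a routed student teaching application: an ordered
# chain of sign-offs where the document only advances once the current
# approver has signed. Role names mirror the example in the conversation;
# the workflow itself is hypothetical.
from dataclasses import dataclass, field

@dataclass
class RoutedApplication:
    student: str
    route: tuple = ("advisor", "program_coordinator", "accreditation_office")
    signatures: list = field(default_factory=list)

    def is_complete(self) -> bool:
        return len(self.signatures) == len(self.route)

    def next_approver(self):
        return None if self.is_complete() else self.route[len(self.signatures)]

    def sign(self, role: str):
        if role != self.next_approver():
            raise ValueError(f"Out of order: waiting on {self.next_approver()}")
        self.signatures.append(role)

app = RoutedApplication(student="Candidate A")
app.sign("advisor")
app.sign("program_coordinator")
app.sign("accreditation_office")
print(app.is_complete())  # True -- the benchmark is documented and auditable
```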

Joi Patterson:

Having all those systems in place changes everything. So I want you to think back, Amy, because both you and I have experienced this through no fault of our own. Sometimes it's the newness of it, and I can remember when I was new at it. But a lot of times it's because time just wasn't on my side: I accepted the position and accreditation was two years away; you accepted the position and your SPA report was a year away, and things like that. So sometimes time is just not on your side. I want you to think back to then and to now, because for many people time is not on their side. How do you get buy-in for assessment without burnout? Very honestly, think about when you first had to do accreditation, whether that was a SPA report or any other kind of report, and you didn't have enough time. How did you galvanize other people, and how did you do that without burnout then, and how do you handle it now? Because you've learned a lot, and it helps that you've been at your institution for a while too.

Amy Vujaklija:

And I want you to think about that too and talk about your experience.

Joi Patterson:

That would be a horror story. A horror story, because literally there were some nights when I didn't leave the university, because it was all hands on deck. It can create a lot of anxiety, it can create a lot of burnout, but you learn from those experiences. Sometimes, though, things are out of your control.

Amy Vujaklija:

Well, and that goes back to something we mentioned in another episode, and I think in every episode we need to bring this to the forefront: having systems in place, having organization in place. Even if a person leaves a position or an administrator moves into another role, and that happens, having that organization in place is so very important. Now, that wasn't what happened when I took on a report, of whatever magnitude it was. I was lucky, though. The way I was lucky is that it was a revised report I was submitting. I had loads of feedback to work from, and that was fortunate: I was able to take the feedback, implement it, and do that continuous improvement that accrediting agencies call you to do. The thing is, I like the work, and so an institution really needs to think about who they put into that position. It's not something where, oh well, accreditation is coming in a few years, anybody can kind of pull it together at the last minute. That doesn't work. You need at least one person who is dedicated to that role and knows more than what one department is doing, who can see across an institution or across a division. The other thing is, I was lucky too that I had support, people diving in and asking, what can we do to help? Knowing that was really helpful. I think what helps reduce the burnout a little is people understanding that it's a big responsibility. If they are dismissive, if any administrator or colleague or faculty member says, oh, it's just that report, that can be very diminishing, and it can really deflate someone when their efforts are not recognized. We've both been in situations in which we have been recognized and praised for our work, and it's very rejuvenating,

Joi Patterson:

very rewarding, and you work harder. But I want to mention something that you and I did, and it sparked this memory: you and I actually took training in protocols, communication protocols, which became very, very valuable when it came time for accreditation and bringing people in and bringing them on board. How do you listen? How do you include everyone? How do you take their ideas and build on them before you just go to the next person? We started doing those things by writing it down: okay, let's not leave this idea; let's have some more touch points on it before we move to the next one. Because otherwise we're not really listening; we're just waiting for the person to say what they're going to say, and then we move on, and you have all these different comments. So I don't want to get too much into professional development, but I think that if anyone is going to seek professional development for accreditation, aside from going to the conferences, I would say some training in communication protocols is very, very powerful. But Amy, I want to finish this up. I want to go back to something you've said a couple of times, which is closing the feedback loop. You've talked about continuous improvement, and a couple of times you've talked about the feedback loop. What does closing that loop look like in practice?

Amy Vujaklija:

So what we have attempted to do, and I can't say we do it perfectly, is really think about what our stakeholders say about our preparation practices. This is for initial licensure for teachers in my area, but also for the other areas in the university in which we are sending people into professional fields. How are we impacting those fields? What do the professionals in the field, whether they are school administrators or the teachers who will become our graduates' colleagues, say about the preparation of our candidates one year out? So I collected data from surveys about our candidates who were a year out of the program: what can we do better? And from the candidates themselves, because it's important to see, a year later, what is memorable about the program, what they feel was impactful, and how they are impacting the students they serve. And our administrators too, who leave our advanced programs and endorsement programs: how are they making an impact? And when we take that information... well, we like the kudos, you know. It's kind of that closing of the loop.

Joi Patterson:

Oh, they're out there doing what you said you were going to prepare them to do, right. And it's hard sometimes, in this kind of program, because the beneficiaries of what you're doing are not the people in front of you; they're the people in front of the people you're preparing. In this case, it's the P-12 students who are the beneficiaries of what you're doing, not your candidates. So some of that data you're talking about, going back and looking at how they are performing, is really what you need to know whether your program is doing a good job. Did you impact that P-12 classroom? It's a very interesting dynamic. But you mentioned stakeholders, so this is a good place for us to stop, because our next segment is on stakeholders, which happens to be your favorite.

Amy Vujaklija:

Okay, well, that sounds like a great topic for another episode. Thanks for listening to Teaching and Leading with Dr Amy and Dr Joi.

Joi Patterson:

This podcast is supported by Governors State University.

Amy Vujaklija:

Show notes and resources for this episode are available at govst.edu/teachingandleadingpodcast.

Joi Patterson:

Until next time,

Amy Vujaklija:

keep growing

Joi Patterson:

as teachers

Amy Vujaklija:

and leaders.
