3 - Level-Up Your UX with User Research Methods Anyone Can Use
Episode 3 • 13th March 2024 • Everyone Can Design • Alexis Allen - FM Design University
Duration: 00:36:33

Shownotes

User research is incredibly important to the success of any potential app. But what exactly is user research, why is it so important, and what are the methods you can use to carry it out?

In this episode, we deep-dive into the topic of user research methods in UX design. We cover a wide range of topics including:

  • How to conduct effective user research to inform the design process
  • The role of interviews in user research and how to conduct them effectively
  • Why it’s important to challenge your own assumptions and not take everything at face value
  • Why you need to understand the user's world to design effective solutions
  • How the design process is similar to the scientific method

This episode provides valuable insights and practical examples for anyone looking to improve their understanding of user research and enhance their design strategy. 

Transcripts

Ep 3. Level-Up Your UX with User Research Methods Anyone Can Use

===

Matt:

We need to think of these people as humans that exist in a world that is beyond just the thing that they're trying to interact with-- whatever you're designing, right? Like you're designing a piece of software that they're using for one small portion of their day.

Alexis:

The attitudinal research is important. You know, "How do people feel about this?" Overall, are they positive and they just have these few little roadblocks, or is their day just one frustration after another? And what's causing that?

Welcome to Everyone Can Design, a podcast exploring the world of user interface and user experience design. We want to demystify UI/UX design and make it accessible to everyone. I'm Alexis Allen, a Claris FileMaker designer and developer, and the founder of fmdesignuniversity.com. And I'm Matt O'Dell, an Experience Design leader, strategist, and educator.

We bring you practical, actionable advice to help you improve your UX design skills. You can find detailed show notes and more at www.everyonecan.design.

Hey Matt. This week, we wanted to talk about UX research methods. We've been talking about them, I think tangentially, but we haven't really explained what they are or why you might want to use them, why they're important for UX.

I know that you probably, in your job, do a lot of research. I do a fair amount of research, but I'm really interested in your perspective from the corporate world, where you're maybe more into research as a discipline of its own.

So I know the value of research, but I think we can explain it to people and see why they might want to use it, what's the purpose of it, how it can help them. So, why don't we talk about research methods this week?

Matt:

Yeah. I've said this word "method" a bunch of times, and I realize even myself, after I say it a lot, I'm like, "Oh yeah, I know what that means." But probably when I was starting in design, I was like, "What do you mean by methods? What is that?" It took me a while to be like, "Oh, it's just a toolkit of activities, basically, that you can do. And there's a bunch of different types of activities that you can do in any type of design work. And those are what they tend to refer to as methods."

And so, yeah, I think it starts with the idea of: what are those activities? What are those different tools that you can use when you're trying to learn something and research something?

Alexis:

They're kind of ways of getting information, I guess, right?

Matt:

Yeah, exactly. I mean, there's methods for all the different types of design that you're doing, but in this specific case, it's about what you're trying to learn. What do you want to know? What are you trying to validate? What don't you know that you want to have some sort of sense of?

Alexis:

Yeah, so, if we're thinking about it from the 5-Step Workflow Design Framework, we've started off with defining what it is that we're doing. Uh, we have some idea what the problems are-- we talked about that last week. And then we're doing this research, this sort of vague Research phase, which can take a bunch of different forms.

So let's talk about the types of research that there are and the different kinds of methods that you could use to do that research.

Matt:

Yeah, where do you start, I guess -- where do you start with the work you do?

Alexis:

So I tend to do quite a bit of qualitative research. So qualitative research is really finding out about people's environment. Who are they? What kinds of things are they doing? Describing the users, describing their environment, describing their challenges.

So, we're doing things like writing up personas. We're doing interviews with people. We're meeting with them, finding out what it is that they do, how they do it, what are their pain points. I use those quite a lot. I don't use surveys as much. Sometimes interviews and surveys are kind of lumped together. I tend not to use surveys too often, although, we can talk about the use, or lack thereof, of surveys.

I do like to use personas and I also teach people to use personas, because when you don't necessarily have a clear idea of who it is that you're designing for, a user persona can really help you envision that person, and you can bring together different information that you learned into one description of the audience.

Rather than having, "Oh, this person wants to do this, and this other person wants to do that," you can kind of merge some of those details and make it a little bit easier to manage. And then of course, feedback as well. So qualitative again, where people are giving you feedback and you're asking them-- well, I think what we said last week-- we don't ask them, "What do you think?" We ask them, "How easy is it for you to complete this task? Does this meet your needs? Is there anything missing here? Are there any errors?" That kind of stuff.

And then, usability testing as well-- I guess that's part of the feedback. I don't necessarily do it as a separate activity, but you're always testing usability, trying to make sure that it's working from a usability perspective. So that's asking the users, to find out if there are any roadblocks or problems. And then there's the quantitative side, which I do less of, and maybe you do more of.

So why don't you tell us about quantitative?

Matt:

Well, yeah, I think there's qualitative, and I will say, very much in design, we tend to focus a lot on the qualitative research. We lean more in that direction, or we weight that a little bit more than the quantitative stuff. Working in corporate, there tends to be a mix. Because there's always the question of, "Well, you heard from one person in an interview that they felt this way, or that there was a problem here. But is that really a problem? Or, how big of a problem is that?"

There's so many things you could go after and you could try to tackle. So like, how do you decide what is the right thing to go after, and what isn't the right thing to go after? And that's where, you know, certain quantitative methods can come in.

Um, there are people that tend to run a lot of surveys. We talked about surveys as qualitative-- a lot of people use them as quantitative. It's not a great way of doing it, because everybody's sense of what's a one versus what's a 10 on a 10-point scale is different.

Alexis:

It's very subjective.

Matt:

Yeah, exactly. And so it's hard to really judge something. I will say, a lot of what we've tried to do is, once you have the qualitative side of things-- "All right, I'm seeing that there's a problem here"-- then figure out, "What is the right quantitative method to measure that?"

So again, surveys might be a way, other ways might be actually like putting in logs or monitors within the application itself.

Alexis:

Yes, absolutely.

Matt:

Yeah. Diary studies are another way, even though they're also somewhat qualitative. If you have enough people doing it, you can get a sense of, "How frequently is this problem happening?"

Getting them to track things like, "How much time did this waste for you, if you had to go through this problem or had to deal with it?" So even though that's not perfectly quantitative, you're getting more of that by having more people, and not having to talk to them directly, but getting them to fill out and track some of this stuff.

Diary studies are a great way of doing that with a smaller audience-- you can still reach out to them, but you don't have to involve everyone.

Alexis:

So, tell me about diary studies. I don't think I know about this particular method.

Matt:

Yeah. A diary study is basically finding a group of people that are willing to write in a quote-unquote diary, if you want to call it that, any time that you want them to. So you set up a prompt, like, "Hey, any time you run into an issue this week, for this whole week, I want you to go into this spreadsheet or go into this form, and I want you to fill out an entry. Put the date, put the time. Describe what the problem was."

In many cases you can give them dropdowns, or other structured fields-- if you know you're looking in a particular area-- to make it easy for them to fill out. 'Cause obviously they might be filling this out a lot throughout the week, so you want to make it as easy as possible, but give them some things to fill out.

And then, what's the thing you're looking to do? As an example, I've helped a team run a diary study where they wanted to know, "We hear that this audience of people have a lot of different problems, and we're never really sure which ones are taking up more of their time. Are these really serious problems? Are they happening all the time, or are they only happening once a week?"

And so they set up a diary study to have this audience of people put in, "Anytime you run into a small problem, fill out this thing, and describe what the problem was." And then there were columns for, "How much of a hassle is this for you? How do you feel? Is this really bad, or is this just an annoyance, a small-level annoyance?"

And then also filling out, "How much time did you waste?" Or, "How much time did this take up that you would've been doing something else, but you had to troubleshoot this issue?" And we got back some interesting things, where someone was basically marking, "I still haven't resolved this problem over multiple days. I'm still dealing with this problem on day four, on day five. This problem hasn't been resolved yet." And that was a great story that you could tell. There was a mixture of qualitative and quantitative where you could tell that story of that person.
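To make that concrete, here's a minimal sketch, in Python and purely illustrative, of the structure a diary-study entry might have, based on the fields Matt describes-- date and time, a description in the participant's own words, a severity rating, time lost, and whether the problem was ever resolved. The field names and severity scale are assumptions, not a prescribed template:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DiaryEntry:
    """One self-reported entry in a week-long diary study."""
    timestamp: datetime      # when the problem happened
    description: str         # the participant's own words
    severity: str            # e.g. "annoyance", "serious", "blocker" (assumed scale)
    minutes_lost: int        # self-estimated time wasted
    resolved: bool = True    # False lets you spot the multi-day problems

def summarize(entries):
    """Roll the individual entries up into the quantitative story."""
    total = sum(e.minutes_lost for e in entries)
    unresolved = [e for e in entries if not e.resolved]
    print(f"{len(entries)} entries, {total / 60:.1f} hours lost in total")
    print(f"{len(unresolved)} problem(s) still unresolved at week's end")

entries = [
    DiaryEntry(datetime(2024, 3, 4, 9, 30), "Report export failed twice", "serious", 45),
    DiaryEntry(datetime(2024, 3, 5, 14, 0), "Still can't sync inventory", "blocker", 120, resolved=False),
]
summarize(entries)
```

In practice this would usually be a shared spreadsheet or form rather than code, but the principle is the same: keep each field quick to fill out so participants keep logging all week, and make the entries easy to roll up into totals afterwards.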

Alexis:

I was just going to ask that question. You know, where does this fall?

Matt:

But then you're also able to say, "This type of problem-- the one that took this person four days to solve-- 50% of our audience dealt with a similar problem within the week that we ran this study. And on average, it took people four to five hours." So you can start to tell a story: well, if we solve this problem, it's almost like opening up half a day per person, for whoever the audience is. So that was an example of how you use that for both qualitative and quantitative. Tell a story, but also put some numbers behind it, so people are like, "Oh yeah, we should fix that problem, 'cause that would really save us a lot of time and really make things run smoother."

So that was, yeah-- diary studies are a great example of that.

Alexis:

Yeah, it's really interesting. I find you run the gamut with users. Sometimes you have people who complain about every little thing, and even small things appear to be just large headaches.

And then you have the other extreme, where people are sometimes dealing with these really onerous problems that are not that difficult to fix, but they never report them. You never find out that there's this problem that you could have fixed three years ago, but nobody ever mentioned it before. So yeah, I find that that's super interesting. It's just sort of a peek into people's personalities.

So this leads into this other next question, which is, when do you use some of these methods? So I talked a little bit about using them in the beginning, where before you've created anything and you're trying to understand who the people are, who the audience is, what are they dealing with.

You talked a little bit about doing these diary studies and getting some statistical metrics, gathering some quantitative information that you could summarize into a document or into some sort of narrative that would make some sense and would suggest some directions that you might want to go.

Matt:

Exactly.

Alexis:

But, when do you use these various things? So for example, let's take the diary studies, not something that I'm super familiar with. When would you decide that it's appropriate to use a diary study, for example?

Matt:

Well, I'll take your question, but I'm going to answer it slightly differently. One of the interesting things with research-- and also a common misconception that people run into as they get into design-- is this idea that research happens at the beginning. You see the Stanford stuff, or the other frameworks, and they have empathy at the very beginning, right?

That's my time to get empathy with my audience, and then I never come back to that. And that's not really how it works.

Alexis:

We're just going to empathize with them at the beginning and then just forget about it.

Matt:

Exactly. So, you do want to obviously talk to them throughout the project, but throughout the project, you are trying to understand different things from them.

There was a really great thing that someone had put together, I can't remember who, and I'm sorry I'm not giving them credit for this, but they talked about kind of three different types of things you're trying to understand in research.

One is like, are you solving the right problem? And I think that's what we talked a little bit about last week, which is that I have an idea of what the problem is that people have told me. I have a good sense of that. But is that the real problem? What's the root cause of this? Let's try to understand that and make decisions about where we should go.

So the diary study really works well there. Kind of similar to your traditional foundational research, your interviews, the things that you're talking about, one-on-one interviews, observation, ethnographic research, where you're following people around, that kind of stuff.

Alexis:

So you already have a hypothesis and you're trying to prove that hypothesis through doing some user research?

Matt:

I wouldn't say even prove it. It's more of like, they have told me this is what the problems are that we're solving. Is that really the problem? Are there nuances to the problem that I don't understand? And this is going to help us fully define the problem.

You know, there's always that phrase... I keep quoting people and I don't remember where the quotes are from, but there's that quote-- was it Einstein or someone?-- that was like, I'll take weeks to define the problem, and then once I know what the right problem is, I can get to a solution really quickly.

Again, I'm completely butchering the quote, but it's that kind of idea of like, what's the right problem first, and let's define it and know the intricacies of it.

Alexis:

The quote is: "If I had an hour to solve a problem, I'd spend 55 minutes thinking about the problem and five minutes thinking about solutions."

Matt:

There we go. You can just correct me-- this is what you're going to be here for. I'll say, "I know that there's a quote about this," you just tell me what it is, or you can google it and then correct me.

That's perfect.

Alexis:

No, I'm not correcting you. I'm adding to what you've said.

Matt:

No, agreed. But with that, the methods of, yeah, diary studies, interviews... I'm trying to understand what is the right problem, or problems, that we're trying to solve.

But beyond that, and this is where sometimes some projects fail, is, "Okay, I know what the problem is. I'm going to come up with a solution. Do I have the right solution?"

So in that case, I have probably different methods that I'm using in terms of taking very simple prototypes or simple ideas, and getting feedback.

Again, not asking them how they feel about something, but trying to show them something and see if they think it would solve their problems, have them start to work through some things. So, do I have the right solution? That's the second one. And then the last one: "Is it done right?"

And that's where I think you were getting into usability testing, and that kind of research, where I'm taking a solution that I think is done or should be working-- I think it's going to work, or at least it's something that people are using-- and I'm trying to then understand the problems with it, in order to tweak it and make that solution as good as it can be.

So all of those are research methods, and all of them can be used in different ways, depending upon, again, the problem you're trying to solve, what you're trying to understand, and what the type of project is. So, yeah.

Alexis:

It's a bit of a ladder as well, right? You're not just picking and choosing one of these three phases. You always have to start with: are you solving the correct problem? Then...

Matt:

You're doing all three in every project. You're not just doing one.

Alexis:

Yes, and you're doing all three of them somewhat continuously, so that when you come to testing the solution itself, you've already done some preliminary research about whether it's the right problem, whether it's the right solution, and then you're verifying if that's true. And I think we did talk about hypotheses last time, and that's one of the reasons why-- you said, "Strong convictions, loosely held."

Right? So I'm quoting you now.

Matt:

Okay, that's fine. I'll take it.

Alexis:

So it's because you're starting off with some idea of what you think might be true, but then you have to check that because you don't know that it's going to be true. And that's one of the reasons why we're doing this research in the first place. The other thing also we should probably mention is that there is often a distinction between attitudinal research and behavioral research, and this could fall into the qualitative or quantitative side of things. It's just to say that the attitudinal really describes what people say about their job or how they feel about it potentially.

What their attitude is around the research space, whatever it is-- software or work, whatever the case may be. And then the behavioral part is: what are they actually doing? I like the idea of building in logs and monitoring, because that's going to capture, without having to ask anybody, what it is they're doing. How often are they running such and such a script?

Not at all? In which case, it's not super important. That sometimes is really hard, especially if you have a solution that's been around for a long time and needs maintenance. You go to somebody and ask, "Well, how often do they run this report?" And nobody knows. Because maybe nobody runs it, but you don't want to get rid of it, because maybe you just haven't found the person for whom that's the core thing that they need.

Or maybe it's only once a year, but it's super important.

Matt:

Yeah. I think another way of saying attitudinal that I like is-- we talk about feelings or sentiment. You know, survey responses, or reviews on the internet, or whatever-- you look for the attitude of how people feel about stuff. But again, you can get that through talking to them, getting their sentiment. How do you feel? What are you saying about these things, and what can we learn from that? That's a very important thing to know-- again, where the problems are-- because you want to understand where people feel bad, and then also you want to know where they feel good, right?

Like, when in their work, do they feel like they're satisfied? Do they feel valued? Do they feel like they can get the things done that they need to get done? So you want to understand their feelings on both sides of that, 'cause that could help you judge what success is in the future. If you know like, "Hey, people that are going through this, at the end want to feel this way. So how do I make sure they feel that way in whatever I design?" So that's where the attitudinal stuff comes in.

But I definitely like where you took the behavioral side of things-- again, a usability test is a type of behavior. There is attitude in there as well, like, "How did you feel when you did that?" But in many cases you're trying to see: can they get through my design and do the thing that they know how to do, at some level? Is my design clear enough to allow them to do that without any sort of issues, right? So that's where that behavior comes in.

If you then need the quantitative side of behavioral, that's where, yeah, you can put in logging in your solutions, you can put in monitoring, you can do other things, as you said, to see not just when something runs or who ran it, but also how long it took. Right? If a lot of people are complaining, "This takes forever!" you're like, "Does it really take forever? Or maybe, is there a specific time of day it takes forever? Or for specific people? I don't really know enough to know what it means to take forever."
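As a sketch of what that kind of passive behavioral capture could look like, here's a minimal Python example that logs who ran an operation, when, and how long it took-- enough to later check whether "it takes forever" holds up, and for whom. The log format and names are assumptions for illustration; in a FileMaker solution the equivalent might be a script writing a row to a log table:

```python
import logging
import time
from contextlib import contextmanager

logging.basicConfig(filename="usage.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

@contextmanager
def timed(operation, user):
    """Log who ran an operation and how long it took, without asking anyone."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        # One line per run: later you can ask how often this runs, who runs it,
        # and whether it's slower at certain times of day or for certain people.
        logging.info("op=%s user=%s duration=%.2fs", operation, user, elapsed)

with timed("monthly_report", user="alexis"):
    time.sleep(0.5)  # stand-in for the real work
```

With a log like this accumulating quietly, "how often does this report actually run, and who runs it?" becomes a query instead of a guess.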

Alexis:

What does forever constitute? Is forever 10 seconds? Or is forever three minutes, or an hour?

Matt:

Yeah. Yeah, exactly. So much of the work I do these days is what we call design strategy, but it's basically: how do you storytell the problems and the solutions? How do you tell that story so you get people bought in on it? But also, how do you quantify it in some sort of way, so that people are like, "Yes, I agree that that's a problem. That's a problem I want to fix, and I feel it and understand it, and it resonates with me."

And so that's where, again, both the quant and the qual come in-- how do people feel, and what are they doing? If you can answer all four of those-- put a little quadrant together, like a two-by-two, four-up quadrant-- and get the outputs of that to resonate with people, it's a lot easier to get your work done.
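For reference, that two-by-two maps roughly like this, using the methods mentioned in this conversation (the placements follow the discussion; some methods, like diary studies, straddle the lines):

  • Attitudinal + qualitative: interviews, open-ended survey questions
  • Attitudinal + quantitative: survey ratings, sentiment scores
  • Behavioral + qualitative: usability testing, observation, ethnographic research
  • Behavioral + quantitative: in-app logs and monitoring, timing data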

Alexis:

Yes. And I like what you said about telling a story. It sounds maybe a little bit airy-fairy to some extent-- "Aw, you're just doing design. What's this storytelling business?" But that's also something that I find a lot of developers struggle with: "How do I get people to adopt this solution?" And I find that one thing that really helps adoption-- I guess maybe because it helps storytelling-- is specificity. So if I say something like, "Oh, see that car over there," versus something like, "See that red Ferrari over there with custom rims?" I'm talking about the same thing, but I'm conveying a much more specific example of it.

To some extent, the user research that we're doing can help us fill in some of those details and prove to the audience that this is actually true. We're not just listening to management, and management says, "Such and such a department is slow," or whatever. It's very vague-- there's a general problem, it's just kind of like a cloud of issues. But really, we're not there to solve a cloud of issues. We need to actually home in on specific problems that help specific people, and then it's good in all regards. It's good for you, because you have more information about what you're actually solving and what would be the best way to solve it, what your best approach would be. And then also the people hearing you-- similar to what we were talking about last week with workflows and sharing that with your client-- they can see their words reflected back to them, and they can agree, "Yes, this is my problem. You've described it accurately, and I have faith that the solution you are proposing actually will work."

Matt:

Yeah, I definitely have been there when I worked in consulting, building and designing for businesses, where-- you mentioned this earlier, we've talked about it a few times-- you start the project with a definition of what the problem is. And a lot of the clients that are paying the bills and bringing you in already have their own definition. They already know what the problem is. And they're probably right, at a certain level. But they might be missing things, and when they are missing things, you want to be careful in how you present that to them. And this is why, again, the mixture of quant and qual, and behavior and attitude, and being able to tell that full story is helpful. 'Cause if you only come in with qualitative stuff, they're like, "Well, I don't agree with that. I already have in my own head the mental model of what the problem is. And whatever you said here probably just came from one person. How is that really a problem? I don't trust that."

And a lot of times people won't trust it 'cause they're just like, "Well, I don't see the numbers there. You just heard a thing. You can hear a thing from anybody and how true is that?"

So we run into that a lot: they have their own mental models of what the solution is already. They probably just want to tell you what to build. And you're like, "Well, hold on. Maybe that's it. I'm not going to say it's not. I'll take that down. That's an idea. We might have other ideas. Let's figure out what we need to solve here, and what problems we're solving."

Alexis:

The other thing you can say also is, "Let's find proof." They're essentially presenting to you their hypothesis, right? So, okay, let's start with that as our hypothesis, but then let's actually prove that that's true. Let's not just assume that that is correct-- and it very well might be, and you might come back with a resounding confirmation that that is the problem. Cool. You're not there to disprove them one way or the other. But you do want to look for evidence to back up whatever assertions you're basing your future work on. Because that's really important to defend your decisions, right?

We could probably have another podcast just on defending your design decisions, but you want to actually put those with evidence behind them, not just say, "Well, so-and-so told me that this was the case." Because so-and-so might be biased, or they might not have the full picture, or they might be approaching the task from a certain point of view. Whatever the case may be.

Matt:

Yeah. Actually, here's another question I have for you, because this is a question I think I've gotten, anytime I've presented to an audience of developer consultants who are building for businesses, is, how do you get your stakeholders to allow you to talk to the people that are actually using the solution and not just, "Talk to me. I'm the stakeholder. I know everything." How do you deal with that?

Alexis:

I just ask them. I've never actually had anybody tell me that I can't talk to people. I guess maybe because they're assuming that I'm going to do that sort of work, and so they're just open to it? But I think if you just make it a part of your process to talk to people, and you explain to them at the beginning that, "I'm going to be doing some interviews. What I'd like to do in order to figure out what the best solution is for you, is to actually talk to some folks who are up to their elbows in the work and ask them specific questions." I mean, I don't think you even need to be that specific and say what it is you're going to-- just say, "I want to talk to them. I'd like to talk to them, if possible."

I've never had anybody actually refuse me, but perhaps it has more to do with the way that the relationship is set up in the first place? I'm not sure. Sometimes I get the impression that some clients feel a certain ownership over your work, and perhaps that's a result of the FileMaker platform being so accessible.

And some business owners started off as the first developers of their own solution. And when they bring somebody else in, sometimes they feel like that person's just an extension of them and they're executing what the person wants. And perhaps you feel hemmed in by that relationship and it's harder to ask, "Can I talk to somebody else?" Because maybe it implies that that person you're talking to is wrong in some way? I don't know. I find most of my clients who come to me are asking me for my recommendation as a design professional. And so they're open to whatever it is that I suggest, and they're not assuming that I'm just going to execute whatever it is that they've asked me to do.

So maybe that's part of it, is setting up in the first place that, whatever it is, we're just talking about hypotheses in the beginning.

Matt:

I'm happy that you haven't had to deal with this, then. I've dealt with it a few times. Not a ton. I think I'm normally able to have these conversations with people. There's a bit of, "Well, I don't pay them to talk to you, I pay them to get their work done." I've seen that at a few places, where it's like, "Well, just talk to me. 'Cause I don't want them to waste their time talking to you when they have work to get done."

Alexis:

That's a waste of time?

Matt:

I've seen that before. I've seen, "I'm fearful that you're just going to ask them what they want." And again, it comes down to that question of, what do you think user research is?

And people have in their heads this idea of, "Oh, it's like a focus group, where I'm just getting a bunch of people together and saying, well, tell me...

Alexis:

...do you like this, do you like that?"

Matt:

Exactly. Or like, "Tell me your problems and what you think the solution is." And it's like, "No, that's not, that's not how I do user research. That's not what that is."

Alexis:

No, you're gathering information, right? You're gathering data and you're asking them to tell YOU. You're not asking them for their opinions so much as you're asking for facts of what's happening. Their opinions come in, I suppose, to some extent, when we're talking about attitudinal-- you know, their feelings and opinions. But you're not taking those opinions as fact, and you're not just going to be like, "Oh, now I'm going to do something about this." You just want to gather the data.

Matt:

Yeah. I've had to walk people through, like, "Here's what I do when I do interviews." Like, "This is what a one-on-one interview looks like. And people might tell me their opinion on what the solution should be, but that is thrown into the same bucket as every other opinion that I get on what the solution should be, until we get to the point of figuring out the solution. And I'll take all of those and we'll work together and we'll figure out what the solution is when we get to that phase of the work."

But I'm not taking everything at face value. It is then up to us to validate that in some sort of way. Again, what's the quantitative side of things? Where do I take this if I need to quantify an assertion that someone's making? But I want to talk to them, right? I want to understand their world, because again, they're going to be the ones using it. And if the stakeholder is really not bought in, I then tell them stories of previous times. Basically, I use the, "You are putting this project at risk, and you are putting your money at risk, in having us design and develop a solution that you do not know will work well for the people that are going to be using it. And here's an example of a time when that happened in the past for me, when I didn't get to talk to people, and this is how that went." So that's my last resort. But I have had that a few times-- some people are like, "Nope, just talk to me. I know everything. I'm the manager. I'm the owner of this business." And you're like, "All right, cool. But... please? Can I talk to your people? Even just a little bit? Can we talk?" So, yeah.

Alexis:

One thing I have run into sometimes is people in upper areas not having a lot of confidence in people in other departments. And it kind of is conveyed through, "Well, such and such a department is just terrible." And I always somewhat reject that, because I feel like very few people go to work and are intentionally doing a bad job.

Maybe they're doing a bad job, but most of the time it's that they don't have the tools quite that they need in order to do the job in the way that other people are expecting them to do it.

Matt:

Yeah, agreed. I think, yeah, a lot of design is assuming positive intent from people and taking that time to truly describe them. We need to think of these people as humans that exist in a world that is beyond just the thing that they're trying to interact with-- whatever you're designing, right? Like you're designing a piece of software that they're using for one small portion of their day.

I've similarly been in that same situation of, "Those people over there-- don't talk to them, don't trust what they say, because they have their own agenda," and blah, blah, blah. And you're like, okay, maybe they do.

Alexis:

If they do have that agenda, what is that agenda?

Matt:

Exactly-- do they even have that agenda?

Alexis:

Yeah. And I might not believe their agenda, but I want to see what is going on. Also, you want to understand the landscape so that you can handle the challenges. It's much harder to design if you're just going in blind and you don't know how people feel.

That's why the attitudinal research is important. You know, "How do people feel about this?" Overall, are they positive and they just have these few little roadblocks, or is their day just one frustration after another? And what's causing that?

There's so many things that it could be, which is why we want to have-- why we want to do this research, and we want to do it continuously at different points because we want to actually know what it is that we're dealing with.

Matt:

Yeah. When you're building for businesses, and again, you know who the users are, you can go talk to them. They're right over there, right? That's where they're sitting.

But yeah, the anonymous audience is tough, and I've dealt with even internal solutions where the audience size is so large-- when you're dealing with a population of more than 10,000 people inside a company, it's still anonymous at a certain level. There are people you can talk to, but they're not everyone, right? So you have to get really good at screening, and understanding who you want to talk to, and how to find the right people to talk to.

There are different tools out there, like usertesting.com. Basically, the researcher puts together these screener questions, like, "I'm looking for a subset of people across these demographics. Do they have this type of product or not? Do they have a checking account? Do they use an iPhone, or do they have an Android?"

Or, like, you find those questions...

Alexis:

You have to have an idea ahead of time of what you're looking for.

Matt:

And yeah, you have to have an idea, and I think in many cases you probably do have an idea. I mean, most products don't just come out with, "I have no idea who my customer base is." You want to understand and create personas, like you mentioned at the beginning, of who these people are, 'cause that's getting into the definition phase.

I'm now defining the problems. I'm defining the people, which is the end of the research, right? You want to go out and figure out who those people are and get some stories, and so you screen to find the right people. Sometimes people will just search on LinkedIn-- if you're looking for a certain type of person, you'll search for them and send them a cold email: "Hey, will you do an interview with us?" Or, "Will you do a thing with us?" But it really comes down to having good screening: I want to get the right audience, one that is wide in the right places, but also is going to have the right background for what I'm trying to learn.

And another thing that we tend to do-- this is actually not just for that anonymous audience, but for any larger audience-- is find the extremes. Find the people that have been doing it forever and are experts in it, and find the people that are brand new to it and have never done it before.

You're going to learn stuff on both sides. And you're going to learn very interesting stuff about how that expert person thinks about it. That might help you figure out how you design to make someone that's new think like an expert. You'll also see where people that are brand new have issues. So there are all of these questions: how do I screen? How wide an audience do I need to go for?

Where do I want to get different perspectives? And then screening on those different demographics to find them.
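To illustrate the screening logic, here's a small, hypothetical Python sketch: filter a candidate pool against a screener criterion, then deliberately pull the extremes-- brand-new users and long-time experts-- as Matt describes. The criteria and field names are made up for the example:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    uses_product: bool       # assumed screener criterion
    years_experience: float

pool = [
    Candidate("Ana", True, 0.2),
    Candidate("Ben", True, 12.0),
    Candidate("Cal", False, 3.0),
    Candidate("Dee", True, 4.0),
]

# Screener: keep only people who actually use the product.
screened = [c for c in pool if c.uses_product]

# Deliberately sample the extremes: newcomers and long-time experts.
newcomers = [c for c in screened if c.years_experience < 1]
experts = [c for c in screened if c.years_experience >= 10]

print("Invite:", [c.name for c in newcomers + experts])  # -> ['Ana', 'Ben']
```

Real screeners on a platform like usertesting.com are questionnaires rather than code, but the logic is the same: decide the criteria up front, then deliberately include both ends of the experience spectrum.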

Alexis:

Yeah. And in terms of usability, it's important to design both for newbies and for experts. If you design everything for a newbie, then the experts get annoyed, and if you design only for experts, then new people are lost and they don't know how to work your thingamajig.

Matt:

Yep, exactly. Yeah, finding those two can be tough at times. But screening is then the answer. It's tough-- it's a lot of work. I'm lucky: most corporate environments that are large enough, and have large enough teams, have specific operations people that do that kind of work and make sure you're doing it correctly.

If I was a lone person by myself, it would be really tough to do all of that stuff to find those people. But there are tools and websites out there that'll help you find people who are willing to do user tests. If you don't have the ability to go to your own clients-- people that are already using the product-- and screen from them, there are options out there.

Alexis:

Well, that's really interesting. Alright, any last thoughts about this-- something we haven't covered?

Matt:

I don't think anything new, but wrapping it all up: again, this idea of research methods... The thing for me to always remind myself of, and come back to, is the phrase you quoted me on: "Strong convictions, loosely held." Know that at any point in time you can be proved wrong about what you think the problem is, what you think the solution is, how usable you think the solution is.

And so that's why we do these different types of research, is to validate and learn and understand so that you can be sure you're solving the right problem in the right way, in a way that makes sense to people.

Alexis:

Yeah, it's a bit like the scientific method, right? You have your hypothesis and then you do your experiments to figure out whether your hypothesis is correct or not, and finding out that it's not correct is just as valid as finding out that it is correct. And hopefully, if you do the research, then even if you are incorrect about your original hypothesis, you gather enough information that you can then make a new hypothesis and then go and test that.

Matt:

Exactly.

Alexis:

Awesome. Thank you so much for chatting with me about this, Matt. It was really interesting and I hope that people are inspired to do more research themselves.

Matt:

Yeah, most definitely.

Alexis:

Thanks for listening to the Everyone Can Design Podcast. To view the complete show notes and all the resources mentioned in today's episode, visit www.everyonecan.design. Before you go, make sure to subscribe to the podcast so you can receive new episodes as soon as they're released. And if you're enjoying our podcast, please leave us a review.
