97 — The Price of Precision: Exploring Costs and Quality in Surveys with Karine Pepin
Episode 97 • 12th February 2024 • Greenbook Podcast • Greenbook
Duration: 00:36:55


Shownotes

At an average of $0.72 per hour for survey participants, what does this mean for the quality of your data and the insights you depend on?

In this episode of the Greenbook Podcast, we sit down with Karine Pepin, a standout author and market research aficionado, to explore the intricacies and challenges of data quality in the industry. With a rich background in sociology and a profound love for market research from the get-go, Karine shares her critical insights into the systemic issues plaguing online surveys, including participant engagement, incentive effectiveness, and the impact of fraudulent responses. The discussion delves into the necessity for the industry to move beyond short-term, cost-driven strategies towards a future of enhanced data integrity and trust between researchers and participants. Highlighting her journey through the landscape of market research, Karine calls for a reevaluation of research methodologies and the adoption of innovative approaches, all while announcing her exciting new venture aimed at revolutionizing the way high-quality research is conducted.

You can reach out to Karine on LinkedIn.

Many thanks to Karine for being our guest. Thanks also to our producer, Natalie Pusch; and our editor, Big Bad Audio.

Transcripts

Karen:

Hello, everybody. Welcome to another episode of the Greenbook Podcast. I’m Karen Lynch, happy to be hosting today and happy to be talking to our guest, Karine Pepin, whom I have interacted with online for at least the last two years here at Greenbook. She is a regular contributor to our article ecosystem, and I’m always thrilled to see her work—she’s actually one of our most prolific authors. We’ve published so many pieces by Karine, and each one is a treat for the industry. True thought leadership. Again, such a pleasure to have you on the show today. Welcome.

Karine:

Thank you. I’m super excited to be here.

Karen:

That’s great. One of the things that I saw on your LinkedIn bio is this phrase about how some people fell into market research. For you, it was love at first sight, which is a beautiful testament to your passion and emotion for the industry. Could you start things off by telling us a little bit about how you fell in love with the industry? And where are you with it today?

Karine:

Yeah. I notice that a lot of people in our industry say they kind of fell into it. It was their third job and somehow they got into it. For me, my background is in sociology. I have a graduate degree in sociology, and I took a lot of research classes in college. When I was introduced to SPSS, I was totally blown away by this whole thing—by the power of statistics, by the power of surveys, by running crosstabs—and being able to say, “Oh, males are more likely to do this.” I knew I always wanted to do this, and that obsession with data quality also started when I was in college. My very first job was as a research assistant for the tourism board of my city, and I was responsible for managing their face-to-face surveys with visitors. My role was to coordinate the interviewers who would go out every day with their clipboard—because that’s a long time ago—and they would ask questions to tourists at different landmarks about their experience. My role was to coordinate that and to check the paper surveys at the end of the day to make sure everything looked good. I came across this batch of surveys one day, and it didn’t feel right. Keep in mind this was my very first research job—my very first job of any kind, to be honest—but something didn’t feel right. I started looking into it, and what felt a bit off was that there were too many people describing themselves as homemakers. Now, you’ve got to understand that homemakers got a shorter survey, because we didn’t ask them a bunch of occupation and employment questions. Long story short, the interviewer admitted she made these up, and she resigned. I had to throw out all of her surveys, not just that batch, but everything. Because how do you trust, now, that she did a good job before that day? I started over. Then I never really looked at surveys and data sets the same way. Then fast forward to 2016. This is when I started noticing weird patterns in the data from the online sample that we collected. This is not a new problem. Right? 2016. I started wondering, “What is happening here? This looks weird.” I always like to lift the hood, play around with the data set, try to understand what is happening, because I think if you can’t understand the cause, then you can’t really fix the problem in the future. Then I started writing for Greenbook, started being more active on LinkedIn, and realized there are a lot of people who are into it as well—even more so now than two and a half years ago when I started writing about it.

Karen:

Well, I think it’s a true testament to what a true researcher will do: look for those anomalies and try to get to their root cause and what is going on. I know that we talked quite candidly about our last wave of GRIT data, and our Greenbook researcher noticed some anomalies in the open ends that led us to identify AI-generated responses. It took really looking at them, not just asking, “Okay. What are the numbers saying?” but really looking at how the data was presented and at the quality of the responses. That discernment of a true researcher is a very valuable asset at a time when fraud is coming at us in many different ways.

Karine:

For sure. When you look at things in aggregate, you will miss things. It’s a respondent-level problem. Right? You’re going to miss things if you just look at a percentage and say, “Oh, it feels about right.” Yeah, but if half the people rated something really high, and half the people rated it really [unintelligible 00:04:57] low, it still averages out to being okay. It doesn’t mean that, at a respondent level, it makes sense.
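To make that aggregate-versus-respondent-level point concrete, here is a minimal sketch with invented ratings: two polarized groups of respondents average out to a score that “feels about right,” and only a respondent-level look at the distribution reveals the split.

```python
# A minimal sketch (invented data): a polarized, respondent-level pattern
# hiding behind a reasonable-looking average.
from statistics import mean
from collections import Counter

# Hypothetical 10-point ratings: half the respondents love it, half hate it.
ratings = [9, 10, 9, 10, 9, 1, 2, 1, 2, 1]

print(f"Aggregate mean: {mean(ratings):.1f}")  # 5.4 -- "feels about right"

# Respondent-level view: the distribution is bimodal, not centered.
print("Distribution:", sorted(Counter(ratings).items()))
```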

Karen:

Yeah, yeah. There’s so much we can learn. I’m actually really excited—shameless plug for our upcoming IIEX Health event—because there’s a doctor I’ll be facilitating a fireside chat with who is going to share how he looks at data. It’s definitely with the lens of a medical researcher. Right? A different type of scientist, if you will. I’m really excited to hear that and learn from him about what practices we can put into place in the insights industry. Anyway, a little sidebar shout-out for that. We’ll put it in the show notes for those of you listening, because I’m particularly excited about that talk. You’ll love it too.

Karine:

Yeah. I think I will.

Karen:

Knowing you the way I do—which is largely through our LinkedIn interactions—by the way, find us both on LinkedIn, because we have some fun conversations. Let’s talk about the study that jumped out at me when you shared it online—this research on research that you participated in as a researcher to see what you could learn. Tell our audience what you did; why it caught my eye will become apparent.

Karine:

Yeah. I started taking surveys as a panelist, and I did not go into this thinking, “Oh, I’m going to take surveys for two years, and I’m going to figure out how much panelists make.” This is not what happened here. Honest to god, I was on Twitter. I saw an ad. I signed up. I got eight panel invitations from that one ad that I clicked on. Eight different panels. I signed up on all of them. I’m not on all of them anymore, because some of them I found better than others, and I stuck with those. I started taking surveys. I love surveys. I really got into it, because I realized there are a lot of problems with the data that can easily be explained if you understand the sampling ecosystem and the participant experience. That’s why I kept at it—once you lift the hood, there’s so much to get into. Then I realized recently that one of the panels had a very good payout summary. That’s when I started digging into, “Well, how much did I make? How much does that work out to be per hour?”

Karen:

Yeah. I think what’s interesting—another slight sidebar—about your process is that you really were thinking to yourself, as somebody who puts surveys out into the field, “I would like to understand them from the inside out.” My background is qualitative, not quantitative, so in the qualitative space, we would often watch journalists, for example, to see how they interview people on television. How does an interview manifest in a written newspaper or magazine article? Where else can we watch how on-the-street news interviewers approach their interviews? We learn from stepping into the world of our medium—which was the qualitative interview—from different angles and different perspectives. If I ever had the chance to be interviewed, I would take it, because I wanted to see what it felt like on the other side. It’s a strong approach to say, “Let’s experience this,” because then you can embody the survey a little bit more. Right?

Karine:

Yeah. For sure. It’s not even something that you can understand if you sign up and do it one time. You have to be at it every day for a while. I know from my payouts that I took surveys on 70 days in that year, so 70 sessions. Right? I’m still learning. I can’t say that I do that many now. Right now I’m doing other things, but there’s always something to learn.

Karen:

Yeah. I think that’s a great segue into: what did you learn? Right? You learned a lot. Let’s go layer by layer. What are some of the learnings, in addition to how much somebody might have earned per hour if this were monetized that way?

Karine:

I went into this with very low expectations. Right? Because we know that respondents or panelists are not making a killing there. I was still surprised that I ended up making 72 cents an hour once I did all the math. I was surprised for two reasons. One is most panels have a point system. You earn points, and then you can redeem them for gift cards. That is not very transparent. You don’t know how much you’re earning per survey or when you screen out. You know eventually you’ll get your gift cards. I was surprised at how little I earned. The other thing that surprised me is the system is also designed to [unintelligible 00:09:55] complete. It doesn’t take into account the time you spend trying to qualify for a survey, which is not nothing. These screeners are not short. There’s the router, where you have some questions to answer. You get into the survey. Then, at a minimum, you have demographics. You’ll have some qualifying questions like usage. Then sometimes—and often—you’d get some random [stuff, too 00:10:17], that people want to size whatever they’re trying to do. Random PPIs that have nothing to do with you screening into the survey. You’re 5, 7, 10 minutes into this, and then you screen out. It’s not just the 50 cents that you make when you complete. When you consider how much time you’ve wasted trying to get that 50 cents, that is what is pretty shocking, because you’re not really paid for this. I can tell you I was paid between one and four cents for screening out. Yeah.
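As a rough illustration of why the effective rate collapses, here is a back-of-the-envelope expected-value calculation using figures quoted in this episode: roughly 50 cents per complete, one to four cents per screen-out, and the roughly 32 percent completion rate Karine cites shortly. The per-survey time estimates are assumptions for illustration only.

```python
# Back-of-the-envelope expected value per survey attempt, using figures from
# this episode: ~50 cents per complete, 1 to 4 cents per screen-out, and the
# ~32 percent completion rate Karine cites. The minutes are assumptions.
p_complete = 0.32        # share of attempts that complete (quoted)
pay_complete = 0.50      # dollars per completed survey (quoted)
pay_screenout = 0.025    # dollars per screen-out (midpoint of 1 to 4 cents)

t_complete_min = 20.0    # assumed length of a full survey, in minutes
t_screenout_min = 7.0    # assumed minutes lost before screening out

expected_pay = p_complete * pay_complete + (1 - p_complete) * pay_screenout
expected_minutes = p_complete * t_complete_min + (1 - p_complete) * t_screenout_min

print(f"Expected pay per attempt:  ${expected_pay:.3f}")         # ~$0.18
print(f"Expected time per attempt: {expected_minutes:.1f} min")  # ~11.2 min
print(f"Effective hourly rate:     ${expected_pay / (expected_minutes / 60):.2f}/hour")
```

Under these assumed times, the effective rate lands around a dollar an hour, the same ballpark as the 72 cents Karine reports.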

Karen:

Hmm. Interesting. I want to take a minute and go back to the very fact that these are called incentives—we are trying to incentivize people to participate in studies. We’re not looking for this to be their income; we’re trying to encourage participation. It raises the question of how much participation is worth to an individual. How much is an hour of my time worth, or your time, or a product consumer’s time? It feels like the very word “incentive” doesn’t really apply anymore. We’re not really incentivizing people. Anyway, that could be a segue there. Is it really worth it, and is that what participants are thinking?

Karine:

The other part is—so, I screened out about 7 times out of 10—right—68 percent of the time, I screened out. And that’s when you’re not trying to lie to get into the survey. Right? The success rate is only 32 percent. That is also very discouraging, I think. I have had friends who signed up on panels, and their first observation is, “This feels like a scam, because I’ve been at this for 20 minutes now, and I’m not qualifying for anything.” I think most people might try it out, and then they might just leave. I don’t know what the right amount is. I don’t necessarily think it should be minimum wage or anything like that. It’s more like a side hobby sort of thing. But as long as we don’t pay people for their time or for the number of questions, then we’ll never be committed to improving that experience. Because screening out is practically free. Compare that to a profiling survey—I earned five cents for a profiling survey, which is more than when I screened out. It’s a very short-term view, I think, that we have. Right? We’re not trying to build something sustainable for the future, where we have a lot of great profiling data and can target people. We’re just trying to make money on this job. I don’t know how much is right—this is a really tough question—but the incentives are probably not really a motivation right now for people. They’re not really making them want to stay in that system.

Karen:

Yeah, yeah. For sure. You mentioned gift cards, and sometimes, when I have these conversations, I imagine what it would take for me—and I am truly happy with a Starbucks gift card. I’m not going to lie. I love the idea of, “Hey, I’ll do whatever you want. Just buy me a cup of coffee.” I still buy into that concept. Right? Maybe that’s not for everybody, but maybe the gift cards are truly compelling for some people. What are some other things you came across beyond the point system that earns toward gift cards? Anything you either came across or any ideas you have for what could make this a more rewarding process for participants?

Karine:

I think some people have tried donations. On the panels that I’m on, I don’t know that it’s an option. There are a lot of different gift cards you can get—Amazon or Starbucks, you know—but I don’t think there are donations. I think ultimately, though, cash is king. I don’t know whether donations or other things work.

Karen:

Yeah. I think that’s probably very specific to the type of study. Right? I’m speculating now—this is not based on any study that I’m currently a part of—but if I were talking to an audience doing survey research for sustainability work, for example, well, maybe that goes hand in hand with cause-based donations. Maybe if I’m doing a consumer packaged goods study, I’d actually like something that lets me engage with that particular category myself. If it’s financial services, well, then sure, I wouldn’t mind a discount on something. So one thought is tailoring the incentive to the type of study you’re in. I think the bottom line is how to enhance the participant’s experience right from the beginning: we value you, and we value your time. Do you have any other thoughts on what your journey in this process revealed as far as the participant experience?

Karine:

Yeah. I think the biggest pain point is the mismatching of people to surveys. If we were able to match people to surveys faster, then they wouldn’t be wasting so much time. There’s more to it. What I’ve noticed in my data is that when the sample is really well targeted, you actually get better data, because people are engaged with the topic. Right? They’re more thoughtful in an open end. It’s more relevant to them, so there’s definitely a benefit there beyond not giving them an annoying experience and wasting all this time. I think that’s a big one. Then the other thing is we need to talk about the surveys themselves, because so far we’ve mostly been talking about the sampling side. I often hear that the surveys are terrible, that the surveys are not great. To be honest, it’s not the worst part of this process, I think. The first thing I noticed when I started this journey of taking surveys is how dated surveys look. If you compare this to the user interface of any website or app that you use on a regular basis, it looks like a time machine to 1999 or something. What’s frustrating is that I know there are some very nice survey platforms out there. I use them. But the mainstream platforms are not that. They’re platforms that have been around for a long time, and they have all the features you could want. They’re very functional. Okay. They have conjoint. They have [Leesfield 00:17:00]. They have everything that you need. They’re just not pretty. It’s like there’s been no investment in UI. I’ve started designing my surveys in PowerPoint myself, because imagine if we spent as much time on surveys as we spend on client deliverables, on reports, on proposals. They would look so much better. It’s hard to do this. You need to have a very patient programmer, because you can’t use a do-it-yourself tool for that. Then you go in like you would design a report, with icons and people. You have a more conversational tone. I think it’s not that hard. Right? You can automate this. You need to have nice templates. I don’t think there are any UX designers who have ever tried to design surveys. The programmer is more like, “Okay. Well, the logic is working, so there’s nothing wrong with the survey.”

Karen:

Yeah. That’s a very interesting point. Certainly in our industry, we have UX and UI researchers who study how pleasant a platform interaction can be, or how a user experiences going to a site or through an app. It’s really interesting to apply that same design expertise to surveys that aren’t necessarily user research. It’s a very interesting concept. I like it very much. For all of you UX researchers listening in, let that simmer and see if you can’t encourage that. I think it’s a really cool thought, Karine. Thank you for sharing it. Any other findings before I dig back into being a participant and what that was like? Any other higher-level findings coming out of this work?

Karine:

I think one thing about how the whole system is designed is that we’ve designed a system you can only be successful in if you lie and if you have volume. Right? You’re not going to go in for 50 cents a day. I earned $51, and it took me about 300 survey attempts to do this. Okay? That’s the proportion we’re looking at. Anyone who wants to make this worth their while is encouraged to game it. Maybe if they had good incentives to begin with—I don’t know—but you’ve got to get into it. You’re like, “Well, I’m the decision-maker on this, I guess.” We’ve designed this thing, and now I don’t know how we go back. Right? Until we are committed to validating the person’s identity—right—and to not encouraging them to do a lot of surveys, we will never get out of this.
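As a quick sanity check, the figures quoted across the episode hang together. Here is a small sketch of the arithmetic; all inputs are as stated in the conversation, and the derived values are simple division.

```python
# Reconciling the numbers quoted in the episode: $51 earned over roughly
# 300 attempts, a ~32 percent completion rate, and an effective $0.72/hour.
total_earned = 51.00   # dollars (quoted)
attempts = 300         # survey attempts (quoted, approximate)
success_rate = 0.32    # completion rate (quoted earlier)
hourly_rate = 0.72     # effective dollars per hour (quoted)

completes = attempts * success_rate                            # ~96 completes
print(f"Per attempt:  ${total_earned / attempts:.2f}")          # ~$0.17
print(f"Per complete: ${total_earned / completes:.2f}")         # ~$0.53
print(f"Implied time: {total_earned / hourly_rate:.0f} hours")  # ~71 hours
```

The roughly 53 cents per complete lines up with the “50 cents” mentioned earlier, and the implied 71 hours is loosely consistent with the 70 sessions Karine describes if each session ran about an hour.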

Karen:

I think what you’re talking about here—it’s all connected—is the idea of how much somebody’s time is worth. If I were ever asked to participate in a research initiative and it wasn’t going to give me an incentive of a few hundred dollars, for example—if I were going to do an interview and wasn’t paid professional executive rates or something—I’d be like, “I just don’t have the time for that.” What we’re talking about is making participants feel like, “Oh, yeah. This would be nice extra money to have on hand.” We’re not talking about them making a living. We don’t want that. We do want people to feel, “This would be good if I’m qualified,” instead of asking, “How do I qualify? Because I’m a little desperate for this income.” I know that some of you have followed the conversations I’ve been having with Kerry Hecht and Mickey Hill about the idea that there are people who are actively learning how to be deceptive in this space. They’re learning how to be dishonest so that they can qualify for more. That’s very different. Overstating how much they use a product, for example. That is not the spirit of why companies suggest we talk to people who have used a product three or four times a week. We don’t want people to overinflate just to qualify. We want to talk to people who genuinely are heavy product users. Maybe we’ve done a poor job explaining to people why the criteria matter, why the decisions made on the other side are actually valuable and not arbitrary. It seems that there’s more we can do beyond the incentive program—maybe around our transparency about why we care about heavy usage versus light usage versus non-users. I don’t know. I think the whole ecosystem needs everyone to take a hard look and ask what else might be a contributing factor to this level of fraud and poor data quality. Did you have any other thoughts about what else we could be doing?

Karine:

Well, I think you have a head start with qual, because you have more of a relationship with that person. People are so anonymous—right—on these online panels. It’s very difficult, I think. I think there’s a huge lack of trust, to be honest, because you might be recruited on one panel. Right? You’re like, “Okay. I signed up for this panel.” Then, the next thing you know, you’re thrown into some router, bounced around, and you end up somewhere. I think what this does is that both the panel and the participant feel anonymous in here. Right? The panel doesn’t really feel responsible for the experience, because sometimes it will be their panelists [answering 00:22:52] the surveys, and sometimes it won’t. And the panelist knows there’s really no way to track them down, no way to hold them all that accountable for their bad surveys. I think this anonymity, and this lack of trust in the system in general, is one of the things that contributes to fraud as well.

Karen:

Yeah. It’s like a chicken-and-egg scenario of which comes first. Is it that we need to be more honest with participants so that they are then more honest with us? Do we need to trust them in order for them to reveal more information? It’s really quite cyclical: how do we trust one another in this space? How do we get back to a place where there’s trust between both parties—meaning the one initiating the survey and the one we’re hoping will participate in it? It’s a quandary.

Karine:

Ultimately, this will take time, and this will take money. It’s a long game, and we’re not really playing a long game. We’re thinking of it as a job. This is the project. This is the job. I’m just going to get this done.

Karen:

Yeah. It’s interesting you bring up money, because Lenny Murphy and I have certainly been talking about the fact that these efforts will cost more. It’s going to be really hard for people to wrangle with the fact that good-quality data is something we have to invest in. Prices are not going to come down. It’s going to become more and more expensive to put measures in place to ensure data quality. Prices may go up for research—even if you think AI support will bring economies of scale moving forward, we’ll have to pay more money. It is going to become more expensive to do the research at hand. It’s going to be an interesting challenge that the entire industry has to face. We have to stop chasing the low-cost provider, low-cost sample, low-cost recruitment—we can’t keep going after cheaper. We have to lean towards quality in the trifecta of faster, better, cheaper. We can’t sacrifice quality. That’s a lesson we’ve learned, in many ways the hard way.

Karine:

I think we’re already paying for it, though. We just don’t know it. Right? Because there are all these hidden costs. All the time we spend cleaning the data is not free, but we’re so focused on CPIs that we completely lose sight of all the other costs that poor data quality leads to. The obvious one is the researcher’s time. There are also the biases we introduce into the data—right—when we sit there going, “Oh, is this good enough? Is this a real person? Is this not a real person? Okay, now I’m going to remove this person.” I obsess over this, because I don’t want to create more biases. Right? I want to make the right call. Is someone really not paying attention, or is someone really [click farm 00:25:38]? Then there’s also the cost of bad business decisions. What’s the cost of doubt? Right? There’s so much more than the CPI. As long as we focus on the CPI alone, we’re not seeing the big picture.

Karen:

It’s true. Those two things. The cost of bad business decisions based on data that isn’t clean or accurate, that is not of the utmost quality—I don’t think you can put a price on that. It could be catastrophic if that data isn’t clean. And I do feel that, in many proposals that are sent out, more emphasis needs to be placed on the efforts being taken to make sure that the data is of quality. Again, yes, I was on the qual side, but I also worked for a full-service company for a while. Looking back, I wish I could go back and redo some of those proposals to include, as a feature and benefit of working with that company, the fact that we were working hard to mitigate the risk of bad data. Anyway, I put that out there too. I think that can be talked about and is a proof point for a perhaps higher bid in that proposal.

Karine:

Well, what I find challenging is that everybody says they have the best data quality. This is a really weird situation to be in, because I know—I think I have better data quality, because I care so much. Right? It has to be better. Then someone else is going to say, “We have better data quality. We also use fraud-detection software. We also do this. We also do that.” Unless you’re an expert who can really tell what is marketing and what they actually do, how would you know? The other thing you can’t really tell is that there are a lot of decisions you make in this process. Right? You can have good fraud-detection software. That doesn’t mean you have it turned on. It doesn’t mean your threshold isn’t set at two percent. Unless the buyer is very savvy, how would they know this? We’re not really good at regulating ourselves. At what point does the industry say, “Okay. Well, this is lying. You’re not doing this, and this person is doing the right thing”?

Karen:

Yeah. It’s an interesting quandary, and I feel like my call to action for everybody listening is: take a look at your practices. See what’s in place. If you are a buyer, take a look at your partner’s practices. See what’s in place. Have that conversation—right—before the project begins. Have the conversation in advance. One of the habits of highly successful people is to begin with the end in mind. Well, let’s launch all of our projects a step ahead of that, saying, “What do we need to do to ensure that the quality of our data is spot-on to help inform the business decisions we want to make as a result of this work? What steps do we need to put in place to make sure that, at the end of the day, we have research results we feel really confident and good about?” Anyway, I’m sure I’m grandstanding a bit there as a not-current research practitioner. I’m sure you have more advice for the industry as well. Let’s go there before we wrap. What are your high-level takeaways and your call to action for the industry?

Karine:

Well, I think we’ve been talking about data quality a lot—right—in the past two years. I think the goal was to raise awareness. I think we now need to move to solutions. I’m ready to move to solutions. Awareness is done. We’re beating a dead horse. We know what’s happening. We know it’s bad. What are the alternatives? Right? There just aren’t many. On the B2B side, you can do a custom recruit. It will cost you money, but your data will be beautiful. There will be barely any cleaning to do. The question is what you do on the consumer side, where we’ve been used to getting 2,000 of everything, because it’s so cheap, so why not? We’ve created all this demand out there, so much demand. Can we start validating every single person? I don’t know. I know I’ve turned to other sources beyond traditional online panels for consumer research as well. The question is how you scale that model—so that it’s not $200 per respondent, obviously; we can’t pay that. It’s also the speed. Because I’ve compared CRM sample and panel sample. Right? The real people who are not panelists are very different. If there were a system with only real people and no panelists, I don’t know that it would actually work for the industry, because the real people don’t lie, so they screen out a lot. Right? The real people are slower at taking the surveys, so your 20-minute survey is 26 minutes for them. Okay? Because they don’t know the patterns or the questions, and they drop out a lot. They have very little patience for this. Okay. Can we find 2,000 people who are not in the mindset of doing this survey about ketchup or whatever? I think we’re also going to run into problems with feasibility. I think we’ll need to turn to more passive behavioral data where we can. And I don’t know if I want to open up the whole synthetic sample topic. I’m not an expert on it, but for those projects where it’s a good fit, it might actually help decrease all the demand on panelists. Right? Which is good, ultimately, if it does the job.

Karen:

Yeah, yeah. It’s so funny you brought that up—I feel like I just commented on something else on LinkedIn today about how right now everyone’s really worried about AI and synthetic data, and I’m like, “I think we skipped over our worry about data quality in the human factor.” [Laughs]

Karine:

I know. [Laughs]

Karen:

Let’s remember we had this other issue to deal with also.

Karine:

I know. I know. We’re not comparing here amazing data quality to AI. That’s not the comparison. I think there’s a role to play for everybody here.

Karen:

More pressure on the researchers to be savvy and to really know what they’re doing and to really have a skill set that involves critical thinking. Anyway, it’s so great to talk to somebody who has all of that. I appreciate you being here for sure. Any questions or topic areas that we didn’t get to before we start to wrap? Do you want to share anything with our audience that wasn’t on the brief or that you have coming down the pike for yourself? This is a great time for you to bring up some other things.

Karine:

Well, as we wrap up, I’m excited about what’s coming in 2024 for me, because I’m starting something new with a qualitative colleague who is as passionate about quality in research as I am. We’re excited about that. We’re still figuring out exactly what the offering is going to be, but we both have 20 years of experience, so we can take on the hard studies. There’s nothing we haven’t seen before. We can do the whole full-service thing. We like to call ourselves a non-agency agency, because we want to be agile. We want to be flexible, and we really want to help in-house researchers. We feel there’s got to be some white space between one extreme—“We’re doing everything internally. We can do it, but we’re at capacity. By the time it’s time to actually socialize the data and work with stakeholders, we’re tired.”—and the other extreme—“Here’s $200,000 to run this.” There’s a way to evolve the full-service model when you’re agile and you don’t have all the overhead. I think this is where we’re going with this, and we’re really excited to see how we can evolve full-service.

Karen:

That’s great. I’m excited to watch it unfold as well. Good luck to you in that venture.

Karine:

Thank you.

Karen:

I will be paying attention. I look forward to learning more through all of our social channels. What a pleasure to have you here today.

Karine:

Today was great.

Karen:

Such a great conversation. Yeah. Anything else that you want to add? Anything else you’re looking forward to besides your new business venture? Anything else that 2024 will bring your way?

Karine:

Oh, god. I’ve committed myself to reviewing all the academic papers about data quality. I want to see exactly how other fields do it. What do they consider good? Has anyone actually run a test to know whether straightliner checks really work? No. I don’t think we’ve explored this enough. That is something else that’s on my agenda. I’ve committed to spending 100 hours doing this, which I figured is 2 hours a week. No big deal.

Karen:

Yeah. Totally doable. [Laughs]

Karine:

I’m already four weeks behind here. [Laughs] This will happen. This will happen for sure.

Karen:

That’s great. That’s great. Ladies and gentlemen, that is true thought leadership—somebody who has such a growth mindset that they say, “And, by the way, I will also take on this task of feeding my brain with academic papers.” Karine, such a pleasure to talk to you today. Thank you so much for joining us.

Karine:

Thank you so much for having me.

Karen:

My pleasure. My pleasure. I also want to shout out Natalie Pusch, our podcast producer. Thank you, Natalie, for everything you do for this show. To our editor, Big Bad Audio, and, of course, to you, our listeners: thank you for tuning in each week. I love doing this. I know Lenny loves doing it too. We do it for you and because of you. Thank you. Have a great rest of your week, everyone.
