Think Twice: Even Bad Science Can Get Published!
Episode 35 • 30th October 2024 • Barking Mad • BSM Partners
Duration: 00:47:03


Shownotes

Do you have trust issues? We do, too. There is a lot of research out there, both good and bad, and the difference between reliable and not-so-reliable research typically comes down to one thing—study design. Using recent real-world examples, like the first-ever retraction in JAVMA’s 180-year history, join us as we expose the critical differences between well-designed studies and those that fall short.  

Related Episodes

Incidence episode: https://bsmpartners.net/barking-mad-podcast/a-mountain-or-a-molehill-what-science-really-says-about-the-incidence-of-dcm 

Lit Review episode: https://bsmpartners.net/barking-mad-podcast/pawing-through-the-research-uncovering-the-fatal-flaws-of-DCM  

Examples of Poorly Designed Research

More on the JAVMA retraction: https://www.petfoodprocessing.net/articles/18426-unraveling-the-copper-controversy  

The Effects of Feeding Pulse-Based, Grain-Free, Diets on Digestibility, Glycemic Response, and Cardiovascular Health In Domestic Dogs: https://harvest.usask.ca/items/a96ed26b-7cb5-4179-8308-20f67e7a9ffa  

Development of plasma and whole blood taurine reference ranges and identification of dietary features associated with taurine deficiency and dilated cardiomyopathy in golden retrievers: A prospective, observational study: https://pubmed.ncbi.nlm.nih.gov/32413894/ (Check out the Expression of Concern for this one: https://pmc.ncbi.nlm.nih.gov/articles/PMC7480844/)  

Taurine deficiency and dilated cardiomyopathy in golden retrievers fed commercial diets: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0209112  

Responses in randomised groups of healthy, adult Labrador retrievers fed grain-free diets with high legume inclusion for 30 days display commonalities with dogs with suspected dilated cardiomyopathy: https://bmcvetres.biomedcentral.com/articles/10.1186/s12917-022-03264-x  

DCM Research by BSM Partners

Blood parameters research: https://onlinelibrary.wiley.com/doi/10.1111/jpn.13946  

Incidence paper: https://www.frontiersin.org/articles/10.3389/fanim.2022.846227/full  

Board-Invited Literature Review: https://academic.oup.com/jas/article/98/6/skaa155/5857674?login=false 

Show Notes

00:00 – Welcome to the Episode 

03:22 – Breaking Down the Research Pyramid 

09:10 – Considering All the Factors (And Controlling for Them in Research) 

10:00 – What is a Literature Review, and Why is it Important? 

12:47 – The Peer-Review Process and Fatal Flaws 

17:07 – The FDA’s Announcement: Biased Questions Led to Biased Answers 

20:46 – FDA Saves Face in Subsequent Updates 

23:18 – What Goes into a Well-Designed Study? 

28:20 – Nothing is Impossible, Some Things Are Just Really Hard! 

32:16 – Examples of Where Research Fell Flat 

35:24 – Blood Parameters Don't Tell All 

38:36 – Don’t Demonize Ingredients, Instead Study Nutrients 

41:19 – Full Transparency: Potential Limitations in BSM Partners’ Research 

44:17 – Conclusion and Farewell 

Transcripts

Jordan Tyler: Do you remember that Facebook post you made when you were 16? Yeah, you know the one. You may think you've deleted it from the face of the internet, but are you really sure? The truth is, the internet is a deep abyss of information and misinformation, and once a post or a rant or an idea or a mistake is uploaded into this abyss It's now out of your control.

My point here is, the internet is forever, whether we like it or not. And that embarrassing teenage post is likely still floating around out there in a sea of true and false tidbits of knowledge. Ultimately, the choice is ours to differentiate between good and bad information. And this is true for the research world as well.

Not all research is created equal. And the way a study is designed has a significant bearing on whether the data gathered, and conclusions reached are actually sound. Take the recent retraction in the Journal of the American Veterinary Medical Association, or JAVMA, as an example. Several authors, currently employed by Hill's Pet Nutrition, submitted an article to JAVMA for their March edition.

Several experts then responded in letters to the editor, which ran as an op-ed to the end of May:

Ultimately, this dialogue resulted in the first ever retraction in JAVMA’s 180-year history. Concerns raised in these letters centered around erroneous copper quantification methods, a lack of diet history information, and the inclusion of misinformation, as well as the extrapolation of data from sick animals onto healthy animals.

And these flaws ultimately led researchers to unreliable conclusions that could have significantly misdirected future research into the topic. In today's episode of Barking Mad, we'll be demystifying this research process, discussing the difference between retrospective and prospective research, and highlighting good and bad study designs, all in an effort to empower you, the listener, whether you're a pet industry professional or a pet parent yourself, with the critical skills needed to differentiate good research from not so good research.

Stephanie Clark: Welcome to Barking Mad, a podcast by BSM Partners. We're your hosts, Dr. Stephanie Clark.

Jordan Tyler: And I'm Jordan Tyler. Okay. So, let's give a little background on the research process, the different types of research, and from my understanding, scientific research exists kind of like a pyramid.

You've called it the research pyramid. I guess essentially there are different levels of control and therefore reliability of the findings of a study. So, could you explain a little bit, are you Stephanie today or are you Dr. Clark? Which one do you want to be?

Stephanie Clark: Whatever you want.

Jordan Tyler: Okay.

Stephanie Clark: I'll be Bat Woman today. No, just kidding.

Jordan Tyler: Wonder Woman, would you care to comment?

Stephanie Clark: It's up to me to save the world one research pyramid at a time. So, I feel like way back when, we used to have the health pyramid, or the food pyramid, and now we don't. We've switched to the food plate, I think. So, the research pyramid basically parses out different designs of studies based on how controlled they are, or how repeatable or reliable a study can be.

And each part is important, right? It all adds to the scientific database of knowledge. You have the bottom of the pyramid, and those are kind of our no-design studies: case studies, case reports, possibly narrative reviews, or editorials. So, they're not really based on anything. They haven't collected data, at least not enough data to draw many conclusions.

They're just noting something. Then we kind of have the middle section of the pyramid, and this is where observational studies come in. And we can have retrospective controlled case studies, like we saw with the DCM alert. People were noticing, and kind of going back and drawing on these dogs that were diagnosed with DCM and looking at their history.

Then we kind of elevate that to prospective controlled studies. So, let's design a study. Let's control some factors. A lot of researchers love controlling everything. I do too. And then at the top is kind of what we call meta-analysis, which is a review of all these research papers to come up with a collective conclusion at the end of the day.

Because you can't just really draw one conclusion from one paper and say, “Yep, this is it.” What happens if we repeat this with a different group of dogs or a different group of people? So, it gets kind of confusing because retrospective versus prospective, what do those even mean? They're big words.

I'm like, yeah, I know retro. Like, I like to dress retro, but how does that apply to research? So retrospective studies are things that already happened in the past and you're looking at them. So, I created this little example while I was trying to think of how best to describe a retrospective study.

And so, you've got this small town, and they eat carrots from this particular field, and it causes their eyes to turn orange. Now, this is totally made up. There's no real small town. There are no real orange-eyed people out there.

Jordan Tyler: If you guys are out there, let us know, because we want to talk to you.

Stephanie Clark: Let me know if I predicted your outcome correctly. So, first, we would basically round up everybody with orange eyes and say, “Okay. Let's look at what you ate, what you did, your lifestyle, and see if we can discover some sort of commonalities between these groups of people.” And so, say you were looking at their diet, and everyone ate carrots.

And you're like, “Oh, perfect! Maybe carrots are the cause of these orange eyes.” But there could be other factors. Like, maybe they drank chocolate milk on Saturday. And so how do we know if it's carrots, chocolate milk, because it's a Saturday, or all three? You don't. You can't isolate it. Um, and that's where prospective studies come in, is where you get to isolate these individual factors.

So, if we wanted to isolate carrots, we would basically feed a group of people carrots from field A and carrots from field B and see if their eye color changed. Now, if we wanted to try and see if it had anything to do with chocolate milk, we would add chocolate milk to both. So, at the end of the day, you've got to really isolate these factors in prospective studies to really draw a conclusion.

And if we go back to DCM, it looks like it could be grain-free diets based on their dietary history, what they were eating. But if we actually start digging in deeper, these are predisposed breeds. They're older dogs. A lot of them were males, and these are all risk factors of DCM.

So, was it truly that they were eating a grain-free diet? Was it because they were eating an unbalanced diet, meaning the owner was not giving them enough food or more table scraps? Is it just a perfect storm of genetics and environmental factors? And then the food, does it have anything to do with the food?

Maybe it's just all genetics, TBD on that for upcoming episode, but that's why retrospective is important. You get to draw some commonalities, but without the prospective data, you can't really isolate what is the cause.

Jordan Tyler: I think that's really interesting though, because we talk all the time about how complex pet nutrition is. And it also comes back to: my dogs eat things that aren't their food all the time. I don't want them to. I certainly don't want them to. But sometimes it just happens, and it's over before I can intervene. And I'm like, "Well, that's, I guess, part of your diet now." And so, really, talking about all the different factors that go into not only what goes into a dog's body, like what they consume, but all the environmental factors and all the lifestyle factors and the genetic factors, it just all compiles.

And so, to really boil it down to one thing, this is where good study design really comes in, where you can control each of those factors and really try to get to the bottom of what's the root cause of this issue.

Stephanie Clark: Oh, absolutely. And that's why researchers, again, are control freaks. We love to control as much as we can, because we have to.

It's how we are made. If we don't control it, how do we know if it's carrots or chocolate milk or both? So, it's really important when trying to develop a cause and effect.

Jordan Tyler: Yeah. And I did want to come back to, you were talking about the tippy top of this pyramid, the meta-review. And I know in another episode, we discuss the idea of a literature review.

And it kind of reminded me, a literature review is essentially just looking at all the research that's already been published about a particular topic, and doing an extensive review of all that scientific literature to determine where there are gaps in the current research that we can explore with new prospective research.

So, it's kind of like leveraging and this might be wrong. So, feel free to correct me. But this is almost kind of like leveraging. You have to leverage both the retrospective and prospective to really fully understand whatever it is you're trying to research.

Stephanie Clark: Yeah, literature review, you're gathering as much data as possible.

Did someone look at genetics already in the past? Did someone look at grain-free, grain-inclusive? I'm picking on DCM right now because that's the closest one that we have done a bunch of research on. Then you've got your meta-analysis or your meta-review, and that is actually pulling in everyone's studies and then running your own stats on that.

So, based on the collective research group and all their findings on what they've isolated, the different dogs, the different groups, the different seasons they've run the studies in, how does that play into whether there is a cause and effect? Are there a couple of things playing into this?

So, what makes a meta-analysis or a meta-review a little bit more rigorous than a literature review is that stats portion, to really figure out: is there a significant difference?

Jordan Tyler: Yeah, and that sounds like something that would require a really high level of collaboration as well. If the previous research has been done by a bunch of different parties, maybe we all come together and share the data and come to a conclusion that benefits not only us, but pets and pet people.

Stephanie Clark: Yeah, ideally, in a perfect world, that would be great. And I'm not sure why we don't do it more. Maybe it's the funding. Maybe it's the lack of collaboration. I'm speculating here, drawing up drama, but we should be doing it more. And I mean, really, at the end of the day, we're all in the pet [industry] because we love pets, and we want to do the best by them.

So, to me, it just seems very natural to collaborate. To make sure that we're giving our animals the best nutrition or understanding diseases and the disease processes. So, we can better help our pets.

Jordan Tyler: And really quick, while we're on the topic of all these different types of research, kind of the end goal, or the happy ending, for a piece of research is that it's submitted to some kind of scientific journal and it's peer-reviewed.

And I wanted to give our listeners a chance to understand maybe what the difference is between some of the bottom-of-the-pyramid research, op-eds, things that haven't really taken an in-depth look at a topic, versus things really at the high level of the pyramid, and where peer review plays into that: why it's so important, and what credibility it lends to a piece of research.

Stephanie Clark: Yeah, so, the gold standard is a published peer-reviewed paper. Like, that is the gold standard. That's what all researchers are shooting for at the end of the day, or we should be, at least: getting our research out there to share it with the community, to inform people. But after you've done all the study design, after you've done the actual research, you've analyzed the results, you've written this long, in-depth paper, you're not done yet.

Stephanie Clark: You still have a little bit more to go, and that's the peer-review process. And this allows experts in the field to review your work. We always call it a good reviewer and a bad reviewer, but really, they're just looking at different things.

So, quote-unquote good reviewers, those are the people that are like, "Your paper is wonderful. You did great. Awesome job. Just fix this sentence, add a period." And you're like, "Sure, that's great. I can fix this, and I can move on with my life." And then you have the quote-unquote bad or tough reviewers that look at things with a little more critique.

They may challenge your stats. They may challenge your study design. Something that you can't go back and fix. Something that you can't go back and readdress, because you've already done the study. Some reviewers and some editors call that a fatal flaw. So, if you've missed a huge thing in your study design and it leaves any kind of holes or potential issues with your actual research and the data you collected, they can just flat-out reject it.

And so, you've gone through all this study design. You've gone through the research, the data collection, analyzing, writing the paper, waiting with bated breath for those reviewers, and it could get rejected.

Jordan Tyler: Which sounds like it would be pretty devastating after all that hard work. It reminds me, actually, of a conversation we had with Dr. Anna Kate Shoveller, a researcher and professor at the University of Guelph, which we released in another episode today, in tandem with this one. That episode is about literature reviews specifically, which we touch on a little bit here, but in it she explains fatal flaws as quote-unquote "killer issues," or essentially an oversight so significant that it could kill a paper dead in its tracks.

Let's hear from her.

Anna Kate Shoveller: When you are reviewing, I think if reviewers would remember that their job is to decide whether there are killer issues and those would need to be very clearly defined. So, in science, we would call that a fatal flaw. That usually comes in the form of a design flaw. Now, we're all nutritionists and or veterinarians, but focused on nutrition is what I do.

So, while it's not just the gross design of the project, if they designed their test diets wrong, that's rejectable. Making sure that their diet design is based on interrogating the objectives and hypotheses of the study is also a potential killer issue.

Stephanie Clark: So, and I feel like I'm beating this to death, but it really goes back to that study design.

Have you done your due diligence? Have you looked at everything? Have you controlled as much as you can? Even selecting how many animals, and what parameters, it really all plays a role. But if you do a literature review, you can kind of figure out those holes along the way.

Jordan Tyler: So, as we're talking about study design and fatal flaws, we've stayed really close to the DCM issue.

I've stayed really close to it as a journalist in the pet industry, and now with BSM Partners, we have done a ton of research and spent a lot of time and resources trying to get to the bottom of this issue. It's kind of just taking me back to that era and my understanding of what happened. And so, when the FDA made its announcement that a perceived increase in the cases of DCM in dogs could be associated with diets, and specifically with grain-free diets, the information that they used and gathered to come to this conclusion was…

Stephanie Clark: Premature.

Jordan Tyler: It was. It was goaded by leading questions that led to biased answers. And we really expound on this in a previous episode, which we will link in the show notes. But you can't just take the data at face value. You have to see how it was collected, where it came from, and under what circumstances, because if you are… hang on, let me think of an example.

Stephanie Clark: I was going to say, it's like asking a toddler whether they want a cookie or broccoli. If I say, "Only let me know if you're hungry if you want a cookie," they're going to be super hungry all the time. But if I told her, "Let me know when you're hungry and we'll get broccoli," homegirl's going to starve to death.

Like, she hates broccoli. And it's the same with only asking pet owners feeding a grain-free diet to let you know, but not those feeding a grain-inclusive diet. Of course, all the owners who let you know are going to be feeding a grain-free diet. You asked for it.

Jordan Tyler: Yeah, that's a really excellent example.

Stephanie Clark: I was just trying to like think like, how do I get Esther, like I cannot get her to eat dinner, but then all of a sudden she's ready for a snack.

Jordan Tyler: A girl after my own heart.

Stephanie Clark: But I mean, going back to your point. If we want to go back to the research pyramid, people were seeing incidents, or they were at least noticing it more.

So, kind of going back to our example: we noticed a bunch of people with orange eyes. They noticed a bunch of dogs with DCM. We went back and we reviewed their history, their medical records, and said, "I think this is a commonality. It looks like they're all eating similar things." But when they reported it to the FDA, it was unfortunately just premature.

So, instead of actually looking at genetics, breed, age, pre-existing conditions, we just jumped straight to this. And unfortunately, food always gets picked on, but like Nate likes to say, “Every dog who dies, every cat who dies was eating some brand's food.” So, does that mean it's bad or is that just what they need to live?

Jordan Tyler: I mean, yeah, that brings up a really excellent question. It just comes back to: there are so many different factors that go into our pets' health and well-being. And some of those we can't control, like genetics. But we can control nutrition. And so, hmm, let's blame that, right? So, it's just about really getting the full scope and considering everything, even if it would be more work or more money or more time, or maybe contrary to your beliefs or your opinions.

That's what research is all about. It's all about getting to actual science-based, fact-driven conclusions that can help us understand the world around us, including what we put in our dogs' and cats' mouths.

Stephanie Clark: Yeah, and that's why the FDA later on, a few years later, after a bunch of research came out, actually withdrew it and said there's not enough research, there's not enough data to support this, because when we started isolating different factors, we couldn't just isolate one diet or one ingredient.

Jordan Tyler: Oh yeah, this was in:

For emphasis, I repeat: “Non-hereditary DCM is a complex medical condition that may be affected by the interplay of multiple factors, such as genetics, underlying medical conditions, and diet.” So, kind of like their “get out of jail free card.” Grain-free diets were the initial scapegoat, but when it turned out to be much more complicated than that, the FDA had to backtrack.

Fast forward to today, here’s what the FDA states about DCM on its website now: “While adverse event numbers can be a potential signal of an issue with an FDA regulated product, by themselves, they do not supply sufficient data to establish a causal relationship with reported product(s).” I think that speaks for itself.

Stephanie Clark: And with that being said, maybe it is a perfect storm. We do know it's a multifactorial, a multiple factor disease, and we do know how we live affects us. It can even override our genes. And so, the environments, the stress, other conditions that dogs may get can override their genes. And they can unfortunately get heart disease.

Jordan Tyler: Okay, let's bring it back to the study design aspect. And I just kind of want to pick your brain. Stephanie slash Dr. Clark, put your scientist hat back on. Because I know you, I know you do a ton of research. And so, I just kind of wanted to hear from you. What constitutes a well-designed study?

Is there like a template or something that we should all be following? I'm sure that's not the case. I'm sure it depends. But what all goes into study design? And how can we look to study design to evaluate the strength of a particular piece of research?

Stephanie Clark: Yeah, so, in grad school, I even took a research class, and there's a book on, depending on what research style you want to do, how you have to word your questions or collect your data. It was literally the hardest class I had ever taken, but when you're actually doing the research, it doesn't feel that hard.

So, breaking it down, I guess, could be difficult, but I will try to do my best for you lovely people. So, first you need to think about what factors you want to target. What parameters can help give you useful information? So, for example, if you're looking at heart disease, I don't really need to test your vision or hearing.

It doesn't mean I don't care. It just isn't going to give me useful data on the heart function. And all these parameters add costs and resources. So really figuring out what parameters are going to be collected. And then next, at what frequency? If we test too often, we could waste resources. If we test too late, we may miss that transition.

And so fully understanding, is this a parameter that is going to have a quick change? Or is it a slow, over time, gradual change? And with that, then you need to look at the duration of the study. Is my study long enough? And this is where the cost of the study definitely starts to add in. So, a lot of times we do three weeks or 20 days or even 28 days.

But going back to DCM, there are a lot of 28-day studies out there. Is that enough time? What we observed in our study was that there were changes happening in the body, and they didn't stabilize until day 90. And so, too short of a timeline could shortchange our understanding of a factor's full impact.

And so, we have to consider that when summarizing the results. You know, really understanding how long the study needs to be, how many parameters, and at what frequency we should be collecting, all need to be figured out, because it weighs into the total cost of research. And no one wants to be running a 20-year study with 5,000 dogs.

I mean, we would all be in debt forever. One million, billion, bazillion dollars ($1,000,000,000,000,000,000,000,000,000,000,000).

Mike Myers as Dr. Evil in the Austin Powers film series: One million dollars.

Stephanie Clark: So, how do you know how many animals is enough? Right? Ten seems like a decent number. Why not go with 8? Why not go with 12? And that's where people need to perform a power analysis, which is just a fancy word for: how many dogs, how many cats, how many animals do I need?

And without conducting that prior to designing your study, you could have too few dogs, where you don't see a change. And I'm picking on dogs right now, because we keep referencing a bunch of dog studies, but maybe too few cats. Or, if you select too big of a number, it can wash out your stats and you could miss something.
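[Editor's note: Stephanie's point about power analysis can be sketched numerically. This is a minimal illustration only, using a standard normal-approximation sample-size formula for a simple two-group comparison; the effect sizes, alpha, and power values are generic textbook assumptions, not numbers from any BSM Partners study.]

```python
# Minimal power-analysis sketch: how many animals per group are needed
# to detect a given standardized effect (Cohen's d) in a two-sided,
# two-group comparison, using the normal approximation.
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group for a two-sample comparison."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for the significance test
    z_power = z.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# A large effect (d = 1.0) needs far fewer dogs than a moderate one (d = 0.5).
print(n_per_group(1.0))   # roughly 16 per group
print(n_per_group(0.5))   # roughly 63 per group
```

Note how halving the detectable effect size roughly quadruples the animals required, which is exactly why an underpowered study can fail to "see a change."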

So again, I took this class, the class was impossible, but really, when it comes down to it, every factor needs to be controlled. Even where the dogs are housed, who they're housed next to. There's a thing called stratifying: have we separated our groups to be equal groups? Do we have the same number of females and males in our control group and our treatment group?

Do we have the same age profile? Is their heart at the beginning of the study relatively equal? If we have a group of dogs that kind of have some heart issues, not the best hearts, and we put them all in one group, how can we compare equals? So, really doing as much evaluation as possible, and making these two groups, or four groups, or however many groups the resources can afford, as equal as possible.

So, long story short: again, researchers, we're control freaks. We love it, but we have to be, because if we don't control things, we could miss something, or we could be completely rejected from a journal, and all of our hard work is now in a drawer.
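[Editor's note: the stratifying Stephanie describes can be shown with a toy example. The dogs below are made up, and this sketch balances only one factor (sex); a real study would stratify on age, baseline heart measurements, and more.]

```python
# Toy stratified randomization: split animals into strata (here, by sex),
# shuffle within each stratum, and alternate assignments so the control
# and treatment groups end up balanced on that factor. Dog data are invented.
import random

dogs = [
    {"name": "Rex", "sex": "M"}, {"name": "Bella", "sex": "F"},
    {"name": "Max", "sex": "M"}, {"name": "Luna", "sex": "F"},
    {"name": "Duke", "sex": "M"}, {"name": "Daisy", "sex": "F"},
    {"name": "Rocky", "sex": "M"}, {"name": "Molly", "sex": "F"},
]

def stratified_assign(animals, key, seed=42):
    """Shuffle within each stratum, then alternate control/treatment."""
    rng = random.Random(seed)
    strata = {}
    for a in animals:
        strata.setdefault(a[key], []).append(a)
    groups = {"control": [], "treatment": []}
    for members in strata.values():
        rng.shuffle(members)
        for i, a in enumerate(members):
            groups["control" if i % 2 == 0 else "treatment"].append(a)
    return groups

groups = stratified_assign(dogs, "sex")
# Whatever the shuffle, each group gets two males and two females.
for name, members in groups.items():
    print(name, sorted(m["sex"] for m in members))
```

A plain coin-flip assignment could, by chance, put most of the males (or most of the weaker hearts) in one group; stratifying guarantees the balance instead of hoping for it.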

Jordan Tyler: Yeah, I was going to say, I can't imagine going through all of that, all of what you just described.

And then you get to the end and you're, like you said, waiting with bated breath, just to get a rejection. I'm curious, how long does this process usually take? Like, how long does it take to design a study? Obviously, the length of the study depends on what you're studying, and that's a key factor you need to consider.

But I'm curious how much pre-work goes in, you know, before you even start?

Stephanie Clark: Depending on how rigorous your study is, and depending on external factors, that can really affect the timeline. I'll use our DCM study as an example. We did a lit review, which took a couple of months, but we really needed to figure out what has been tested in the past.

What hasn't, where our gaps were, what parameters we really wanted to focus on, and which ones were maybe not the most ideal. So, we did a lit review. And then we had to find a place, because we needed a place where we could control the temperature, the environmental factors, when the dogs ate, how much they ate, how much was left on our weigh-backs.

So, how much did they not eat? And then COVID hit. That added a little bit to the timeline. But in the midst of all of that, we were able to formulate our diets. Then we had to test all our ingredients, and that takes a couple of weeks, before we could get our nutritional values for our ingredients to make the diet.

Pretty sure we started this in:

Jordan Tyler: Wow.

Stephanie Clark: And that's not even including the summer we spent adopting all the dogs out.

Jordan Tyler: So yes, that episode, we will link to that one as well. That's a feel good one. So definitely take a listen.

Stephanie Clark: Yes. There is life after research for humans and dogs.

Jordan Tyler: Stephanie's still here. She's still kicking, even after all of it. I mean, that timeline, so much work goes into it. So, I have a newfound appreciation.

I knew research was hard, but dang, I didn't know that you have to keep that level of rigor for months and months and months.

Stephanie Clark: Yeah, I once got asked: is it challenging, or is it impossible? That question has filtered through my brain for anything that I do, for better or for worse, because sometimes it just makes me a little headstrong, like, it's only difficult.

We can totally do this. And it's like, well, maybe not. Maybe it actually is impossible. But those are the questions we had to ask ourselves during a research study. During COVID, you couldn't even get into the facility. I had to go through so many hoops just to get into the facility to see the dogs.

And take care of them. And I think it was really difficult, but it wasn't impossible. We could set a safety protocol for how to get a person into a research facility. But anyway, I mean, that's why, hashtag nothing is impossible. It's going to happen. We are going to make fetch happen.

Lacey Chabert as Gretchen Weiners in Mean Girls: That is so fetch!

Rachel McAdams as Regina George in Mean Girls: Gretchen, stop trying to make fetch happen. It’s not going to happen.

Jordan Tyler: So first of all, Stephanie/Dr. Clark. Thank you for that detailed explanation. I think you did a fabulous job, even though you said that the class that you had to take was really hard.

Stephanie Clark: I think I got an okay grade, but it was like the worst semester.

Jordan Tyler: Baby's first B?

Stephanie Clark: Yeah, yeah, I was that student. I'm okay.

Jordan Tyler: I was, too. I was, too. That's why I was like, this is probably a safe space for me to make that joke.

Stephanie Clark: Oh, absolutely.

Jordan Tyler: I know you mentioned briefly an example of a study that used maybe too small of a sample size and maybe didn't run for a long enough time. We talked a little bit about that study in an episode that we did with Tim McGreevey, with the American Pulse Association, where basically this study was kind of extrapolated to the DCM issue and used as an example.

But I wanted to talk to you a little bit about maybe why that was not a very good example, because the research initially wasn't even about DCM. It wasn't even looking into that issue. So could you just share a little bit more about that example and maybe other examples of where research maybe fell flat?

Stephanie Clark: Yeah, absolutely. And that's where research design comes in. So, the Saskatchewan study was a two-part study. It was a seven-day study, and then they continued on to a 28-day study. And when you look at that timeline, you're like, okay, 28 days seems like a relatively long time. That's a month, roughly (in February), so we should be able to figure out what's going on.

But maybe not, essentially, with hearts. Maybe that takes longer. And so, because they designed their study originally for glycemic index, and that happens really quickly. I mean, we eat something, our blood sugar goes up, it goes down; that's almost instant. You know, you can watch it over time a little bit, but you're not going to see much difference from day 28 to day 38.

So it makes sense, you know, and they used eight dogs, which possibly for glycemic index makes sense. I honestly don't know if that's an appropriate number. A lot of people on social media said that this study was the nail in the coffin, and I'm paraphrasing, but that this study basically said, yeah, this is what's happening.

And in all reality, it shows some stuff. But it's not long enough to really determine what's going on, and it's not enough dogs to really show a difference. And so that's why you can't just really pivot mid-study and say, “Ah, I'm going to study something else.” Like if you're collecting all these heart parameters, and you're like, I'm going to look at anxiety, or I'm going to look at cognitive function.

There would be much different parameters that you look at than what you would look at for evaluating the heart. So, all in all, they did their study. They've got it published. They got grant funding for it. But at the end of the day, it may not have been the most rigorous, it may not have been the most controlled, and dare I say it's not a nail in the coffin.

By any means, maybe they, like, picked a nail, but it definitely didn't go in a coffin.

So, then, there's other studies that seem very robust, but it really comes down to understanding what the parameters are. So, we looked at, for example, amino acids: amino acids in plasma versus whole blood versus muscle tissue versus what's going on in the heart. Because in the past, we've always just drawn blood because it's what we have access to, and then we try to assume that that's what's going on in the heart.

But do they even correlate? Do they even match? So if it's low in the blood does that mean it's low in the heart? I don't know. We never looked at it. So that was one thing that the lit review ferreted out was we need to figure out these studies that are collecting whole blood or plasma amino acids.

Can you use those interchangeably? Does it matter? And we actually found out that it does matter. What's going on in plasma, the liquid portion of your blood, actually doesn't match what's going on in your whole blood, which seems crazy, but it doesn't. And what's going on in your heart doesn't match what's going on in your blood or your plasma. And your heart is a muscle, but it doesn't even match what's going on in your skeletal muscle, like your thigh. So, your body is working in all these different isolated processes, and they all have different parameters for an amino acid.

Jordan Tyler: That’s really interesting actually. And super important to highlight because there have been a number of studies on DCM that have looked at whole blood or plasma to measure heart health. And it sounds like this newer research, which we’ll link in the show notes for this episode, requires us to go back and reassess the value of studies using these parameters to determine outcomes around DCM, because they could actually be misleading to us and to future research.

Stephanie Clark: And then you get even a step further: is the dog fed? Is the dog fasted? Because obviously, if the dog is fed, you're introducing amino acids into the body from the protein in the food that's being broken down.

Are you accounting for baseline? Plus, what they ate. And that's an oversimplification, but there was a past study, I believe this was Ontiveros' study that used these measures interchangeably. And that's what they had, right? Because it was a retrospective study.

So, they were looking at what people had collected, what they had at the time. Which is great. Again, it's a basis, but because it was retrospective, they weren't able to control any of that. And so, at the end of the day, now that we know whole blood and plasma do not correlate, should we look at that paper a little differently?

Should we separate the dogs that only had whole blood collected from those who had plasma collected, versus those who were fed versus those who were fasted? I mean, based on the science that we've seen, yeah. But that stirs the pot a little bit.

Jordan Tyler: Yeah, stir that pot, girl! Because it brings up a really great point – if amino acids in whole blood and plasma aren’t good indicators of what’s going on in the heart, why would we keep basing future research on inaccurate methods? That’s like continuing to believe the Earth is the center of the universe and basing our understanding of other things on that belief, even after it’s been scientifically proven wrong. Just seems a little silly to me!

I also think it's important to consider when you're conducting a study and there is a diet involved. So, the DCM studies that BSM Partners conducted were really diet-centric, right? Because we're trying to figure out if diet is a root cause of developing DCM. And not all research will have anything to do with diet. It will probably be a factor in a lot of different things, but maybe not such a central piece.

And because it was such a central piece to this DCM research, you didn't just go out and pull pet food off of a shelf and say, yeah, this is what we're going to feed. We're going to feed these readily available diets, even though we don't know exactly the inclusion rates of the ingredients, what concentrations, the efficacy, and all those things that will matter and potentially skew the results of the study.

And so, that's why we went out of our way to formulate our own diet so that we can control every single little detail, really put that control freak mindset to good use.

Stephanie Clark: Yeah, and I mean, for the average consumer, and honestly, before I started formulating diets, you could look at two bags and be like, yeah, these look pretty similar.

The ingredient decks kind of read similar. Oh, they've got peas at number three. This one has peas at number two. But honestly, when it comes down to it, you have no idea the percentage. And so, when we start saying pulse rich, what does that even mean? Because we don't; my dog is getting concerned.

We don't know what the inclusion rate is unless you made the diet or unless you can reverse engineer it. And so, you may be saying this pulse rich diet, and you could be comparing a diet with 40% peas to a diet with 2% peas and the difference is one is the second and one is the third ingredient.

So, you really have to be careful; and then you're banking on someone else doing the appropriate work. And I'm not saying that people don't do good work, but in research, you really need to control it. And if you're going to put your money behind it, you better guarantee that what you're feeding is what you're feeding.

And at the end of the day, we weren't really looking at ingredients. We were looking at nutrients. Does the body truly care where taurine comes from, as long as it gets it? Does the body truly care where fat comes from, as long as it gets it? And so, are we focusing on demonizing ingredients? Or are we focusing on what nutrients are required to maintain health?

Jordan Tyler: So, speaking of all these limitations and kind of bringing all this home, you played a really extensive part in this particular piece of research that BSM Partners conducted into DCM. And maybe there were some limitations in our own study that we can address in the future or share out so that other people conducting research are aware of maybe this limitation. So, would you mind sharing, were there any limitations in the BSM Partners research and if so, what were they?

Stephanie Clark: With all good research, really, we have to humble ourselves. We really should, at the end of the paper, list out limitations, not only so those who are reading the paper understand where things may have fallen short, but also so those who take that paper and want to build on it know where they can build.

And so limitations can be everything from the length of the study. How do we know it was long enough? All we can do is the best within our given resources. You know, we conducted our study for as long as we possibly could. The breeds that were selected could be considered a limitation, too, because some believe that beagles don't develop DCM.

However, a year ago, our beagle, who at the time was eating a grain-inclusive diet, passed away from DCM. So, no dog breed is immune to heart issues. But beagles are what is typically available when looking at controlled research, and that is what we used, along with our mixed-breed hounds. These are limitations that we had in our study, and it's important that we put them out there so that hopefully the next people who want to expand upon this research, who want to dive deeper into DCM or heart health, or into how diet or other factors could affect the overall health of dogs, can see, okay, maybe we should try a different breed, or maybe we should try a different size. BSM looked at small and medium-sized dogs. What about large and giant-sized dogs? So, it doesn't mean it's a bad thing to have limitations.

I think if we all came out with the perfect research study with no limitations, I would want to be in that group because they're clearly bringing in lots of grants and they have dogs for days and the time and the people, but really it just doesn't exist. And so, in research we do our best and we list out our limitations at the end of the day.


Well, Dr. Clark/Dr. Stephanie/Steph, I could probably pick your brain all day long, but I think we've hit all the high points here. So, as we close out today's episode, let's review a little bit. Don't worry, there won't be a quiz, but if you do want some homework, we have linked a ton of examples of both trustworthy and less than trustworthy research in the show notes for this episode.

And that includes some of the pieces of research that Dr. Clark elaborated on during the episode. As you walk away from this conversation, remember that the strength of any research really lies in its design. Whether you're a researcher conducting a study of your own, a reviewer evaluating somebody else's work, or a pet owner looking to sharpen your eye for good research, it's crucial to dig deeper, look below the surface, and challenge yourself to consider every aspect, from potential biases to methodology and sample size, as well as possible limitations.

Take the insights from today's discussion and apply them as you engage with research in the future. Don't just accept information at face value. Critically examine each study, question its design, and consider the broader context. Use some of the examples we've explored here today to sharpen your own analytical skills.

The more you challenge assumptions and dig into the details, the more empowered you'll be to separate solid, reliable research from flawed data. Ultimately, we hope these tools will empower you to make more informed decisions about the brands you buy and the companies you trust. Just keep questioning, keep learning, and trust in your own ability to drive change through knowledge.

Stephanie Clark: Thank you for tuning in to another episode of Barking Mad. If you want to learn more about BSM Partners, please visit us at www.bsmpartners.net. Don't forget to subscribe on your favorite leading podcast platform, or share it with a friend to stay current on the latest pet industry trends and conversations.

We'd also like to thank our dedicated team, Ada-Miette Thomas, Neeley Bowden, Paige Lanier, Kait Wright, and Dr. Katy Miller. And an extra thank you to Lee Ann Hagerty and Michael Johnson in support of this episode.

Jordan Tyler: See you next time!
