Confused by conflicting headlines about pet nutrition? In this episode, we break down how to spot trustworthy research, avoid common pitfalls in scientific studies, and use AI tools to become a savvier, science-minded pet parent—no PhD required. Join co-hosts Jordan Tyler and Dr. Stephanie Clark with expert guest Dr. David Harmon as they speak truth to misinformation, give you the confidence to separate facts from fiction, and advocate for sound science when it comes to your pet’s health.
Helpful Links
🎙️ This episode calls back to several previous episodes:
🧠 Check out these AI-powered research tools!
📚 Expand your research toolbox with trusted resources like Google Scholar, Semantic Scholar, and PubMed.
Show Notes
00:00 – Inside the Episode
02:06 – We’ve Seen This Movie Before…
04:26 – Introducing Dr. David Harmon
06:44 – Understanding the Research Pyramid
08:56 – The Gold Standard: Peer-Reviewed
11:00 – The Components of Scientific Literature
14:13 – Common Pitfalls in Reading Research
22:00 – Starting With a Solid Framework
26:20 – AI-Powered Tool to the Rescue!
33:58 – Dr. Harmon’s Take on AI Tools
35:42 – Remember: AI Is Just a Tool!
37:34 – Today's Key Takeaways
Jordan Tyler: Hello and welcome back to Barking Mad, and no—your ears do not deceive you. We’re coming at you today with brand-new, original music performed and produced by one of our very own consultants here at BSM Partners, David Perez. David is the Vice President of Product Innovation, and the title of the song is “Thank You For The Invite,” because he hasn’t yet been on the podcast… David, if you’re out there listening, congratulations—now you’re on every episode! And jokes aside, we promise to bring you on soon. Now buckle up for today’s regularly scheduled programming—How to Think Like a Scientist (Even If You’re Not One). Happy listening!
Every decision you make for your pet—what to feed them, what supplements to give, and even what treats to indulge them in—can have lifelong consequences. But how do you know if the information you're relying on is actually accurate?
The first step to being an informed pet owner is knowing where to look for this information. (And spoiler alert: it's research!) The second step is being able to differentiate a real-deal study from one that's just blowing smoke.
Dr. Stephanie Clark: A rise in misinformation in pet health, paired with the sheer complexity of pet nutrition, medical advancements, and pet food marketing claims, has created a perfect storm for misleading information that can lead pet parents down a dreaded path of wasted money, poor health outcomes, and in the most egregious examples, even harm to our pet's health.
Jordan Tyler: Yikes. Today, we're here to talk all about scientific literature, which our guest today describes as “the body of all knowledge.” We're joined by Dr. David Harmon, Professor and Director of Graduate Studies at the University of Kentucky Department of Animal Sciences, who will share his firsthand experiences with scientific literature and practical tips for how we can demystify this body of knowledge.
By the end of the episode, you'll feel confident in evaluating scientific research, spotting red flags, and protecting your pet from the risks of misinformation. Let's get started.
Dr. Stephanie Clark: Welcome to Barking Mad, a podcast by BSM Partners. We're your hosts, Dr. Stephanie Clark—
Jordan Tyler: —and I'm Jordan Tyler. Let's start out today with some real-world examples of how bad research has put pet food in a corner.
First, of course, is the DCM scandal, and if you've been tuning in for a while now, you'll know we have several episodes about this, so we'll put some links in the show notes for this episode if you'd like to check those out. But quick recap, in case you haven't heard.
So initially, a perceived uptick in cases of canine dilated cardiomyopathy, or DCM, a heart disease, was prematurely linked to grain-free diets. This was because there wasn't a great body of research around DCM or grain-free diets at that point, so they became an unsuspecting scapegoat in the scandal. Fast forward a few years, and research has uncovered a much more complex picture involving genetics, taurine levels, formulation, and a host of other variables.
This is a perfect example of how oversimplified interpretations of case studies, editorial articles, and medical records can lead to widespread panic.
Dr. Stephanie Clark: Another example is copper-associated hepatopathy (CAH), another nutritional issue the veterinary community is rallying behind. In essence, this copper-associated disease affects the liver in dogs and is induced by dietary copper that accumulates in the liver.
However, hardly any research about the optimal, minimum, or maximum dietary copper levels for dogs—and even cats—exists. This, again, emphasizes the need to understand the nuances of nutrition alongside the complexity of research when making crucial dietary decisions for a pet population.
Jordan Tyler: Exactly. Really good point, Dr. Clark. These examples highlight the high stakes of pet nutrition and the real consequences of relying on surface level understanding instead of scientific rigor. In both cases, we're reminded that navigating the pet health space requires more than just passion and good intentions. It demands critical thinking and a firm grasp of what the science actually says—and doesn't say.
Dr. Stephanie Clark: Now, let's hear from an expert who brings decades of experience with scientific literature to today's episode. Dr. Harmon, it's great to have you here with us, and we're excited to chat with you about scientific literature, which is officially defined as, “a collection of research designed to advance knowledge, written by experts and reviewed by their peers.”
But Dr. Harmon, you've touched a lot of scientific literature throughout your career, so how exactly would you describe what it is and what it's used for?
Dr. David Harmon: If somebody asks me that question, my first response would be: “How would you define knowledge?” To me, scientific literature is the body of knowledge that we have, right? All knowledge comes out of that, and so that would be one question I would have when you talk about what scientific literature is.
I think that should be further defined as peer-reviewed scientific literature, because that's the really critical part of the process. So me, having spent my life as an academic—I'm a university professor—my real job has been the generation of scientific literature. That's my task. I do research to improve the productivity, health, and wellbeing of animals through scientific literature, and particularly peer-reviewed scientific literature. And, uh, that's kind of the currency with which I've worked throughout my career.
Jordan Tyler: Yeah, I love the way you put that—scientific literature is the body of all knowledge. Said another way, all the expert information we rely on to guide our decisions about pet health and nutrition stems from scientific literature—veterinary medicine, for example, relies heavily on peer-reviewed studies to determine how to diagnose and treat pet diseases and conditions, and the companies formulating your pet’s food use this same body of knowledge to craft their diets. In essence, we all rely on scientific literature—especially the peer-reviewed kind that Dr. Harmon emphasizes—to drive meaningful advancements in our pets' health and wellbeing, whether we realize it or not.
And to get a better understanding of the different types of scientific literature out there, we have a clip from a previous episode where Dr. Clark takes us through the whole landscape. Here she is.
Dr. Stephanie Clark: So, the research pyramid is, it basically parses out different designs of studies based on how controlled they are or how repeatable or reliable a study can be. And each part is important, right? It all adds to the scientific database of knowledge.
You have the bottom of the pyramid, and those are kind of our no-design case studies, case reports, possibly, you know, narrative reviews or editorials. So, they're not really based on anything. They haven't collected data, at least enough data, to draw much conclusion. They're just noting something.
Then we kind of have, like, the middle section of the pyramid, and this is where our observational studies come in and we can have like retrospective controlled case studies, like we saw with like the DCM alert—people were noticing and kind of going back and drawing on these dogs that were diagnosed with DCM and looking at their history. Then we kind of elevate that to prospective controlled studies. So, let's design a study, let’s control some factors. A lot of researchers, we love controlling everything—I do too.
And then at the top is kind of your, what we call meta-analysis, which is a review of all these research papers to come up with a collective conclusion at the end of the day. Because you can't just really draw one conclusion from one paper and say, “Yep, this is it.” What happens if we repeat this with a different group of dogs or a different, you know, group of people?
Jordan Tyler: So there’s a glimpse for you—I’ll just say, the landscape of scientific literature is way more complicated than that, but that’s why we love Dr. Clark, because she has such a knack for keeping things simple. And if you want to dive deeper into how study design influences the reliability of a piece of research, check out the “Think Twice” episode in today’s show notes.
But in a nutshell, scientific literature comes in a lot of different shapes and sizes, and that can make it really difficult for the average person to determine whether what they're reading is peer-reviewed—meaning it's been mercilessly reviewed by knowledgeable peers in the same fields of study—and how to interpret the findings with a really keen eye.
Dr. Harmon, could you take us a little deeper into the peer review process to help us understand why it's considered the gold standard for scientific literature?
Dr. David Harmon: Well, the process, when we go to publish research, right, we do the studies, we prepare a manuscript, and then we kind of decide, well, where would we like to publish this in terms of journal?
And that process is, we will submit it through their system and it will go to a handling editor, who looks at it and makes sure it meets their criteria. And then he decides, “Okay, now I need to pick other scientists, my peers—people at other universities, people that work in allied industries, but typically other scientists by training—to read and review that information.”
And then in the process, if successful, I will get back comments. And it's not just commas and question marks, it's: have you correctly applied the scientific method to this in terms of you have an adequately replicated study, you have correctly analyzed the data, you've correctly interpreted the data and documented that in terms of the conclusions that you're trying to draw from this.
In other words, if you want to contribute to the knowledge, you have to adequately justify that contribution, and my peers then evaluate that. And so that's the process you have to get through in terms of publishing your work. So, it can be quite lengthy. Sometimes it'll take a year to get through that process. You know, sometimes three months if it goes well.
Dr. Stephanie Clark: Okay! Now that we know more about the types of scientific literature out there and how it's published, let's start to dissect. There are several components of a piece of scientific literature, and each of them serves a specific purpose.
Starting with the obvious, the title, which sometimes can be highly technical, but it's there to help readers and other researchers know exactly what the scope of the research entails.
Jordan Tyler: Then we have the abstract, which sets the stage and explains why the research was done. This brings us to methods, the quote unquote recipe of a study, which is essentially how it was conducted and what “ingredients” went into it. So, how many research subjects were involved? If the study was nutritional, what were the diets made of, and how were those diets administered and to which animals?
Dr. Stephanie Clark: And methodology is a common gray area that separates good research from bad research. If researchers didn't use the right methods to analyze the results, those results are essentially moot—and when mistaken as bona fide research, can mislead other researchers and consumers faster than you can say “peer review”!
Jordan Tyler: So, once we get through all that, then we have the results, which share raw data from the study that require some kind of interpretation. This is another common gray area to be extra critical of, as any results collected during a study are dependent on the study's methodology. So, if you start with an unreliable methodology, you'll probably get unreliable results.
Dr. Stephanie Clark: Now that we've made it through the 10 stages of gray, you finally get to the discussion or conclusion, where researchers explain what the results mean, why they matter, and discuss opportunities for future research. After all, ask any good scientist and they'll happily tell you that research always leads to more research.
Jordan Tyler: That's exactly why I don't think I could be, like, a tried-and-true scientist. I need closure, you know?
Dr. Stephanie Clark: People build their whole lives on one question. It's crazy.
Jordan Tyler: It is crazy. But it's also like, if we didn't have those people, where would we be today? Right? So, you know, thank a scientist today.
Dr. Stephanie Clark: Oh, it's the same thing I tell people when they change my oil. “Look, you do it, because I can't.”
Jordan Tyler: Exactly. And it’s not science for the sake of science, it’s just that when you ask good questions and do the research correctly, you’re bound to end up with more questions. Think about it like this—if you’ve ever hung out around a 5-year-old for very long, you’ll probably field more questions that start with “Why” than you can count. But that’s because they’re learning, they’re hungry for knowledge, and scientists are curious creatures, too! Even when a really great study is conducted and the hypothesis at hand is answered, there will always be more questions to answer from there—and that’s okay!
Jordan Tyler: So, in learning a little bit more about the process of how scientific literature comes about and the rigor that goes into creating it and reviewing it, it's clear that not all research is created equal. And even for scientists, the process can be complicated sometimes. You know, I'm sure researchers have common mistakes or pitfalls that they fall into when putting papers together.
But what about pet parents or non-scientists out there listening who haven't had that kind of exposure to the realm of scientific literature? How can we really make them feel more confident in navigating it? So, Dr. Clark, if you would, what are some of the most common pitfalls that everyday people make—and some scientists—when reading research?
Dr. Stephanie Clark: First, the confusion of correlation with causation—a tale as old as time. Just because two things happen at the same time doesn't mean one caused the other. One could argue that ice cream sales and shark attacks both spike in the summer. Correlated? Yes. But are ice cream sales the cause of increased shark attacks or vice versa? Meh, probably not.
And don't even get me started on people saying things are linked. I could sit on my soapbox all day, but we really need to take a step back. And just because something is correlated or it seems like it's linked, doesn't mean it is. It takes a lot of scientific evidence and a lot of research to actually prove linkage or causation.
Jordan Tyler: Absolutely. That's exactly what we saw with DCM, so that is a big one. Another common mistake that people make is overlooking the study design. This is my personal favorite, and we even did a whole episode about it as it relates to the DCM scandal. So that is also linked in the show notes for this episode if you want to take a listen.
But for today, a study's design is foundational to how much we can trust its findings. Was it observational or experimental? Was there a control group, meaning a group of test subjects that received a placebo? Was the sample size large enough to be meaningful? It depends on the study you're doing and the different factors you have to consider. But overall, without a thorough understanding and correct application of study design, scientific conclusions can be taken way out of context and blown way out of proportion.
Dr. Stephanie Clark: Like… with DCM.
Jordan Tyler: Just keeps coming back.
Dr. Stephanie Clark: It just is a perfect example. So, another pitfall is when people focus on the abstract and ignore the rest of the study, or when people focus on the abstract and the conclusion and miss all of the things in between.
Jordan Tyler: A great example of this is the infamous chocolate diet hoax, when a study claiming that eating chocolate could help you lose weight made headlines everywhere.
Dr. Stephanie Clark: If only, right?
Jordan Tyler: Seriously.
Dr. Stephanie Clark: I would be so skinny.
Jordan Tyler: Basically, it was a media circus that was sparked by a deliberate hoax designed to expose the media's tendency to run with clickbait headlines or clickbait research paper titles.
Dr. Stephanie Clark: Which, first of all, hilarious as it worked. A hundred percent worked. The actual study was flawed from the start, but that didn't stop it from going viral. This is a powerful reminder of how important it is to go beyond the summary and consider the bigger picture… and even the conclusion.
Jordan Tyler: Yeah. I will say of all of these mistakes, I am definitely guilty of this one. So, you know, we all have stuff to work on.
Dr. Stephanie Clark: I mean. I would be hard pressed to find one person that's like, “Every scientific literature paper I come in contact with, I read the whole thing.” No, we don't have time for that. We're skimming.
Jordan Tyler: Skimming is better than not reading the rest of the study at all. So, do what you got to do, you know?
A couple other common pitfalls. This next one is misunderstanding generalizability. Just because a study found results in one population does not mean that they apply to everyone. We have a great example of this actually in the Study Design episode where Dr. Clark comes up with a hypothetical scenario where people eat carrots and turn orange. So, if that piques your interest, again, that episode's in the show notes, go take a listen.
But here's another example: if you've heard of the Mediterranean diet, it works beautifully for people that live in Mediterranean regions where the lifestyle, the genetics and the culture all kind of align and focus on the central diet. But if we were to transplant that exact model to like a Midwestern American lifestyle, probably not going to yield the same results, right?
So, in pet nutrition, this means being cautious about applying narrow research findings—so, let's say data that you pulled from one area of the country—to all the dogs, all the cats, all the dietary situations across the U.S., across the whole world.
You'd probably be shocked to hear this was another pitfall of the DCM scandal. Essentially, they took data from sick animals and extrapolated it to the entire population of healthy animals, and it just doesn't work that way.
Dr. Stephanie Clark: And finally, we are going to wrap this up with cherry-picking evidence, as confirmation bias is real, and it's easier than ever to fall into. When we only seek out studies that reinforce what we already believe, we risk ignoring contradictory evidence that might be more robust, or even better designed. I see this one a lot, especially with DCM, just because we haven't talked about it enough.
Jordan Tyler: Yeah, for sure. Another great example is Gregory Okin’s widely cited study, which claims that pets contribute 25 to 30% of the environmental impacts of animal production. That's a striking claim, but it's often cited without the nuances needed to really speak to the bigger picture, and so it kind of serves to distort the conversation around sustainability and pet food.
Dr. Stephanie Clark: If we read a little bit further down, for example, the study fails to incorporate the impact of rendered byproducts—essentially parts of an animal that are highly nutritious for pets, but wouldn't be found on your own dinner table for reasons of taboo or squeamishness—in its greenhouse gas emissions estimate, which would actually serve to reduce the environmental impact our pets have when it comes to animal production. Put another way, rendering byproducts into pet food actually adds value back into otherwise wasted animal products.
So, leaving this piece of the puzzle out could actually be seen as greenwashing in reverse, making the industry out to be worse than it actually is. Credit where credit is due, as they say.
Jordan Tyler: So, lots of traps to fall into here. You can probably see why this topic feels so intimidating, but don't let it psych you out. In fact, let's move now to talk about how you, even if—and especially if you're not a scientist—can critically evaluate scientific literature. Don't worry, you don't need to memorize any complex statistics or wade through pages of jargon to spot the basics of good science. You just need a good framework.
Now, in looking at a piece of scientific literature, the first thing you should do is check the source. Is the study published in a credible peer-reviewed journal, or a lesser-known, less trustworthy publication? In other words, source matters.
Dr. Stephanie Clark: Next, look at who's behind the research. Are the authors qualified in the field that they're writing about? Are they animal scientists, veterinarians, or specialists? I mean, I've seen papers where cardiac specialists are writing about nutrition. They're specialists, but definitely not nutrition.
Jordan Tyler: That’s a great point—would you ask a dermatologist for nutritional advice? Probably not!
Dr. Stephanie Clark: And it's also important to check for potential conflicts of interest. Did the study receive funding from a company that stands to benefit from a particular outcome? Now, that doesn't automatically invalidate the research, but it is important to keep that context in mind.
Then, get a grasp on the study design. Not all studies are created equal. Did the research actually test a hypothesis or a question in a meaningful way? Was it a controlled trial or just an observational report? Were there enough animals involved to make the findings statistically relevant? Or did the research use proper controls to reduce bias? These design details matter, and they directly impact how much trust you can place in the results and interpretation.
Jordan Tyler: So, once you get through that, next, it is time to analyze. You don't have to be a statistician, but it is worth asking: were the results statistically significant? In other words, were the outcomes unlikely to have happened by chance? And did the researchers use appropriate methods to analyze their data? If the math isn't math-ing— or worse, if it's missing altogether—that's a major red flag.
Dr. Stephanie Clark: And sample size is huge, too. Either too small or too large can be a problem, so really having the most appropriate sample size can help you reduce the chance that things are just happening by chance.
Jordan Tyler: And don't they call—there's a word for that—don't they call it power analysis? To figure out like how many animals you need to make a statistically significant study?
Dr. Stephanie Clark: Look at you go! Power analysis! Did someone listen to our podcast?
Jordan Tyler: I have no choice. I learned that from Chuck, so thank you, Chuck. Thank you, Dr. Zumbaugh, if you're out there listening.
Finally, consider the conclusions. Do they logically follow from the data that's presented, or are the authors reaching beyond what the study and the data actually support?
This is where a lot of sensational headlines go off the rails, drawing sweeping generalizations from relatively narrow findings. Remember that chocolate diet hoax from earlier? Good research doesn't over promise like that. It stays grounded in the results and acknowledges its own limitations. So, keeping these five considerations in mind can transform the way you interpret headlines and studies about pet health and nutrition, evaluate product claims for truth, and make the best decisions for your pets.
Dr. Stephanie Clark: I always love it when research is, for lack of a better word, pretty uneventful. And then you get to the conclusion and it's like, “And then we can live on Mars!” And you're like, where did that come from?
Jordan Tyler: You're like, whoa, was I reading the same thing that you guys were reading or what?
Dr. Stephanie Clark: I mean, I may have just gone from abstract to conclusion, but that seems like a pretty far jump.
Jordan Tyler: Yeah, definitely a red flag.
Dr. Stephanie Clark: Now, even with a solid framework in hand, let’s be honest: reading scientific literature can still feel overwhelming. It overwhelms me, too. Fortunately, there's a new wave of AI-powered technology making it easier than ever to sift through the noise and get straight to the science.
Whether you're a seasoned veterinary professional or a curious pet parent, there are several resources out there to help you better navigate research with clarity and confidence. Let's cover a few that are worth checking out.
Jordan Tyler: Before we get into these tools, I just want to make one quick note, and that is: these are tools. We all hopefully know by now—AI, in its current state, is not perfect. It does not know everything, and it can even hallucinate or make things up. So, if you're looking into these tools, definitely see them as tools. Don't see them as, you know, the end-all be-all, or the truth. They're really there for you as a resource to help you understand.
Dr. Stephanie Clark: And AI robots, please don’t come after us when you take over the world.
Jordan Tyler: I always say please and thank you when I'm chatting with ChatGPT, I'm serious. I'm paranoid. I'm telling y'all, when all of y'all are conquered by robots and I'm the robot queen because I use good manners, you know… I won't forget about you, but just saying.
Dr. Stephanie Clark: That's the way to the robot's heart, is manners?
Jordan Tyler: You know, good point. I hadn't thought of that. Maybe I should try a different tactic. Hmm.
Dr. Stephanie Clark: I mean, it's fine. Sometimes people say, you know, a way to a man's heart is through their stomach. A way to a robot's heart is through manners. We all got our ticks.
Jordan Tyler: Okay, so getting into some of these tools—first we have what's called Research Rabbit. Think of this tool as your research assistant with a knack for organization. This platform helps you build custom collections of papers based on your interests, and then recommends additional research for you to check out.
What really sets Research Rabbit apart is its ability to visualize the relationships between studies and even authors. It's kind of like a social network for science. You can follow threads, you can dive deeper into a niche topic, or you can even track the work of a specific researcher across multiple publications.
So, this is a great tool for building a broader understanding of a field of study rather than just reading one paper at a time and having to hop from journal to journal to journal and find those yourself.
Dr. Stephanie Clark: One may say that Research Rabbit helps you go down those rabbit holes of research.
Jordan Tyler: Oh, now I get it! To be totally fair, I actually hadn't thought of it that way, but that has to be it, right?
Dr. Stephanie Clark: Oh yeah. You're going down these deep dive holes. It's like the dark side of the web, like you're going to get lost for hours in research.
Jordan Tyler: Be warned.
Dr. Stephanie Clark: Hopefully there's a carrot at the end.
Next, we have Consensus AI. If you've ever wanted a search engine that answers with evidence instead of ads, Consensus AI might be your new best friend. This platform scans through peer-reviewed literature and delivers concise, science-backed answers to any question you can come up with. Ooh, that's dangerous.
It also highlights the key findings and underlying evidence behind each answer, so you're not just getting opinions, you're getting facts. This is especially helpful when you're short on time, but want to ensure you're still making informed decisions.
Jordan Tyler: Which, I mean, that kind of describes all of us, right? Then we have Elicit, which is all about helping you ask better questions and find the best answers. It doesn't just match keywords; it interprets your question, pulls up relevant academic papers, and extracts the information that matters most. Do you need to know what multiple studies say about a single nutrient? Elicit can line that up for you. It's a powerful tool for summarizing complex findings into something more digestible.
Dr. Stephanie Clark: We should have this for relationships.
Jordan Tyler: Ooh…
Dr. Stephanie Clark: Breaking down the complexity of relationships, one question at a time.
Jordan Tyler: That might be your million-dollar idea, honestly.
Dr. Stephanie Clark: Like men: my wife said, “I'm fine.” I asked my wife where to eat and she said, “I don't know.” It's like, “Dot, dot, dot… Get her her favorite food.”
Jordan Tyler: I mean, it sounds like that could go a long way in my relationship. I don't know about you.
Dr. Stephanie Clark: Fine equals not fine. I'm not hungry means I'm going to eat your food, so you better order more…
Jordan Tyler: Incredible.
Dr. Stephanie Clark: Take it away, AI! So, those are three newer AI-powered tools that can help you in your hunt for scientific literature. But it's also worth mentioning good old Google Scholar and Semantic Scholar. These are both go-to search engines for academic research, leveraging AI to improve search relevance and help you find credible papers, citation networks, and influential authors in seconds.
Google Scholar is pretty widely used, while Semantic Scholar is a little bit more sophisticated, using added layers of machine learning to pull key insights and summaries from the body of research you're inquiring about.
Jordan Tyler: Yeah, that's a great point. Those are two tools that have been around for longer than some of these newer AI tools, and another source that a lot of scientists frequently go to is PubMed. So, that's more of a traditional resource for scientific literature, perfect for people who just want to go straight to the source.
And while PubMed isn't an AI tool, it's one of the most trusted databases for biomedical and veterinary science research, and it's maintained by the US National Library of Medicine, so it offers access to millions of peer-reviewed articles. So, while PubMed is more established, it's also a little less user friendly than some of these newer tools. But it's still considered the cream of the crop for animal research.
Dr. Stephanie Clark: Though sometimes it's pulling papers from, like, decades ago, so keep the publication date in mind.
Jordan Tyler: Yeah. Best to take all scientific literature with a grain of salt. So, in all, these tools can be incredibly helpful. But like we said at the top, they are just tools. They can summarize long papers, extract key points, and even help translate super technical jargon into plain English. But like any tool, they're only as good as the hands using them. Always double-check the information they provide against original research. And don't be afraid to ask questions when something doesn't seem right.
Dr. Stephanie Clark: Preach girl. Dr. Harmon, are you familiar with any of these tools? Any words of wisdom or pro tips or cautions when it comes to leveraging AI to help find and interpret scientific literature?
Dr. David Harmon: We have kind of just been getting into that in our group here. We've used, like, NotebookLM, and I have gone in there and taken a group of scientific papers on a topic, right? So, I took four papers about something I didn't really know much about, and it was really amazing, you know, what it gave me back and how it helped me understand that literature. And so that's really a valuable tool to do just that—help you understand.
And I would emphasize “help” and “tool,” right? It is a tool. It can be beneficial, but it's not the be-all end-all, like I say. One of my colleagues asked it for a reference on something, “Give me a scientific reference for this,” and it made two up. They didn't even exist! So, it did what you asked it to do, but it really wasn't technically correct.
So, it's a tool, and we're still learning how to use it. It's not the be-all end-all, but for somebody starting out with “I've got a scientific paper and I want to understand it,” I think students absolutely will learn how to use it more and more in the future. It will be a very valuable tool, but be careful too—it's not perfect.
Jordan Tyler: Yeah, that's a great point. Again, AI isn't perfect and we shouldn't expect it to be. So, approaching these tools with a healthy dose of skepticism couldn't hurt.
Now, Dr. Harmon, you've obviously been dealing with scientific literature for decades—not to date you, but you have a lot of experience and knowledge in this space, and over the last few decades, the information we have at our disposal has just skyrocketed with access to the internet and all of that. So, I'm curious: how far do you think we've come with some of these new tools, and do you think they're serving today's research community well?
Dr. David Harmon: You know, as you think about this from the perspective of somebody who chooses a career involved in science and scientific literature, it's kind of a daunting task. You know, there's so much information out there.
You know, when I was a grad student 40 years ago, you had to go to the library and Xerox it page by page, and now, I bet we don't have a student who knows where the library's at. Everything is so much more available, which is a huge benefit we have today that we didn't use to have.
Dr. Stephanie Clark: Right. I can definitely see how having more resources and advanced computing tools to work with would be a great benefit, but it's also a double-edged sword. We have all this information right at our fingertips, and we get instant gratification whenever we type something into a search bar, but there's a lot of misinformation online as well. So, it's really important for people to think critically about what they're reading and not to trust everything they see on the internet.
Dr. David Harmon: Absolutely, absolutely. Anybody can write and put something on the internet, so everybody should be a little bit skeptical. Just a little bit of skepticism will help you.
Jordan Tyler: In an age where information—and misinformation—spreads faster than ever, it's never been more important to be an informed, discerning advocate for your pet. Not every headline tells the full story, not every study carries equal weight, and not every platform has your pet's best interest at heart.
Dr. Stephanie Clark: Well said. I don't think we could have put it any better. And today's conversation reminds us of a vital fact: science is how we learn, evolve, and improve our lives and the lives of our animals through research that stands up to scrutiny. But even as that body of knowledge grows, it's up to all of us to engage with it responsibly.
Jordan Tyler: Here at BSM Partners, we take that responsibility seriously. Our team is committed to advancing science-backed nutrition, veterinary care, and innovation across the pet industry. Because your pet deserves nothing less than the truth, even if it's a little complicated.
Dr. Stephanie Clark: So, slap on that superhero cape. If today's episode has left you feeling more empowered, here's your challenge. The next time you see a claim about pet food or health, pause and ask yourself, “Hmm, where did this come from? Who conducted this research, and was it peer-reviewed?” Take the framework we provided and apply it to the research to get answers, or try out one of the tools we've mentioned, which are all linked in the show notes below.
And most importantly, remember—you don't need a PhD to think like a scientist. You just need the curiosity to question, the tools to investigate, and the heart to do what's right for your pet.
Jordan Tyler: Thanks again to Dr. David Harmon for joining us and sharing his wisdom. We'd also like to thank our dedicated team: Ada-Miette Thomas, Neeley Bowden, Kait Wright, Cady Wolf, and Dr. Katy Miller. A special shout out to Lee Ann Hagerty and Michael Johnson in support of this episode, and to David Perez for our original music in the intro and outro. See you next time.
Jordan Tyler: [Blooper] Misunderstanding general... I was doomed from the start. Generalize… generalizability. Yeah, generalizability. Yeah! That's a word people say all the time.
Dr. Stephanie Clark: Every day.
Jordan Tyler: I'm definitely not saying it for the first time right now. Um…