Is your team outsourcing their thinking to ChatGPT? In this video, I break down why relying on AI for answers is leading to "thought atrophy" and killing expertise in the workplace. While AI is an incredible tool for efficiency, it cannot replace the nuance, context, and experience that I hire my team for.
I share the story of a recent project where an AI-generated response missed the mark, and I outline the 4 New AI Guidelines I’ve implemented to ensure we use technology to amplify our intelligence—not replace it.
//
Welcome to Repeatable Revenue, hosted by strategic growth advisor Ray J. Green.
About Ray:
→ Former Managing Director of National Small & Midsize Business at the U.S. Chamber of Commerce, where he doubled revenue per sale in fundraising, led the first increase in SMB membership, co-built a national Mid-Market sales channel, and more.
→ Former CEO operator for several investor groups where he led turnarounds of recently acquired small businesses.
→ Current founder of MSP Sales Partners, where we help IT companies scale sales: www.MSPSalesPartners.com
→ Current Sales & Sales Management Expert in Residence at the world’s largest IT business mastermind.
→ Current Managing Partner of Repeatable Revenue Ventures, where we scale B2B companies we have equity in: www.RayJGreen.com
//
If someone on my team is feeding me an AI answer when I ask for their opinion, then I don't need them. And this video is about four guidelines that I put in place for my team and my company, and why I think lazy thinking is actually killing expertise.
Before we get into that, you know it, I know it: a lot of teams, a lot of companies, a lot of people, so-called experts, are outsourcing their thinking to AI. You ask them a question, or you ask for input, and it just gets plugged into AI and it feeds you back garbage. They think that's a shortcut to actual thought, an easy way to get something qualitative back, and it's not. It is laziness. And I'm watching people develop an atrophy of thought, an atrophy of thinking, as we speak.
So a couple weeks ago, I wanted input from my team on a new sequence, a new campaign, a better way to do outbound or prospecting. Just for context, everybody on my team has sales management or sales experience, right? We do fractional management for IT companies. So I have dedicated sales managers, I myself am a career sales manager, we have coaches; everybody here should have some experience developing outbound campaigns and scripting and emails. And what I wanted to do was look at what we were doing today, what we were helping IT companies and other MSPs with, and rethink it. Saying, "Okay, hey, should this be modernized? There's a lot happening with technology and AI and digital marketing. At what point should we look at this and rethink and retool the way that we're coaching and managing our BDRs and SDRs for our clients?"
And I did a lot of research, right? And frankly, I used a lot of AI. I said, "Hey, GPT, run some deep research on some stuff. Claude, run some deep thought, whatever the hell you call it." I also used my 20 years of sales management experience. I used the data. We've got 50-plus MSPs in our clientele right now, so I'm looking at the data here and saying: "All right, who's winning? Who's not? What's working? What are their offers? How are they prospecting? How many touch points? What's the cadence? What's the campaign? What's the offer? What's the structure?" And what I wanted to do was pull this all together and be able to go back to our clients and say: "Listen, we've done a ton of research, plus applied our own experience and our own thought and the data that we have, and we've said let's go with this for a period of time. Let's put this to market. Let's start using this."
I did all this, I sent it to my team, I sent all the research, even all the AI stuff, and I said, "Listen, this is what I'm trying to pull together. Here's my recommendation based on what I've done and what I know. Here's what I recommend that we propose to our clients. But I hired you for your brain. I didn't hire you just to check some boxes. I hired you for your brain. That's how we do things in our company. And I'd love to know: what do you think? Where does this suck? Where can we improve it? What's better? What are you hearing? What are you seeing? What are your friends saying? What are other industries doing?"
And I get back an answer from somebody on my team that is... it's straight GPT. And you know this. If you work with AI enough, you will start to learn very quickly what is AI. AI stuff starts to feel like... man, that sounds good, but it seems fluffy. You can just smell it a mile away. I'm a power user here and I'm like... this just has all of the acronyms and the language structure and the empty words... a lot of stuff but nothing deep, right? And I said: "This is just straight GPT."
Which, by the way, I can use. In fact, I gave you all of the GPT. I gave you all of the Claude. I gave you all of the Gemini. I did that part. I can go to a prompt and engineer the right thing to get the right output, to get the right data. But when I ask for your opinion, what I don't want is a GPT answer. I want your opinion. That is why we hired you.
I love AI. I have an AI device, I record a bunch of shit, I am in AI all the time. I am not anti-progress on this thing. We are fully embarked on this, we are using it for a whole bunch of shit. But when I ask for your opinion, when I ask for your expertise, when I ask for your thought, the last thing that I want is to get GPT shit...
Here's why: AI is basically just one big predictive text model. That's all it is, just on a massive, massive scale. Think about when you're typing your shit in a text message and it predicts the next word. That's just AI on a huge scale, aggregating all of the data and all of the patterns to put together a pattern of words that it thinks, on average, you want, or that it thinks is the right thing. And that is a recipe for average. It means that is the most likely answer that you should expect from this thing.
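To make that "recipe for average" point concrete, here's a minimal sketch (my illustration, not anything from the video) of a toy next-word predictor in Python. It just counts which word most often follows which and then always picks the most common continuation; a real model does the same kind of thing at a vastly larger scale, which is why the default output trends toward the statistical average.

```python
# Toy illustration (a sketch, not anything from the video): a "predictive text
# model" counts which word tends to follow which in the text it has seen,
# then always picks the most common continuation -- the statistical average.
corpus = "the team did the work and the team did the research".split()

# Count how often each word follows each other word.
next_word_counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts.setdefault(prev, {})
    next_word_counts[prev][nxt] = next_word_counts[prev].get(nxt, 0) + 1

def predict_next(word):
    """Return the most frequent word seen after `word` -- the 'average' continuation."""
    options = next_word_counts.get(word, {})
    return max(options, key=options.get) if options else None

# Generate a few words, always taking the most probable continuation.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # -> "the team did the team did": pattern-matching, not thinking
```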
And so when you think about it that way, you realize: it's not actually thinking. Yes, it has access to way more data than you ever could, but it doesn't have the context. It doesn't have the real intelligence. It doesn't have all of the nuance and shit that goes into this. And I can read a GPT answer and I can know. But there are a lot of people now that feel like it's a shortcut. That instead of deep thinking, instead of sitting down and going, "I don't know... what do I think? What's the right answer here?", what they do is they just go plug this shit in. Like, "I don't know, what do I think?" And they read what it thinks and they copy and paste it.
It's fine for a lot of people. It's not gonna be fine in my business, because we hire for very specific reasons: a culture, an experience, a background; sales expertise is one of them. So I said: Listen, I'm just gonna address this. There are four guidelines that are going in place for my business. Here they are.
First is: We are only using closed systems. If we are plugging in our templates, if we are plugging in our models, if we are plugging in our frameworks, our training, all of our content, stuff that I have worked for 20 years, that we have worked for a long time, to aggregate, to pull together... stuff that we know works, that curates just the best of the best shit... what I am not comfortable doing anymore is feeding that to everybody else. We use team models that don't train the public models. And anything that is related to MSP Sales Partners or Repeatable Revenue now needs to go through that. That closed system allows us to leverage all that's out there without continuing to train the models, and without essentially sharing with the world what we're thinking and what we do. Because frankly, that's kind of how we get paid.
Guideline two was: Do not feed me AI. I know how to use AI. I know how to fucking prompt. I know how to write shit. In fact, I'm really good at it. I know how to prompt AI to give me objective answers that challenge my existing thinking, that make me think differently. I know how to do that. If that's what I wanted, I would just continue to ask AI. When I ask you, a member of my team, for input or for feedback, what I don't want is an AI answer. So guideline number two was: If I ask for your opinion, if I ask for your input, if I ask for your thought process, do not give me an AI answer. Don't take the shit from GPT. If you want to use it to refine your answer? Cool. If you want to use it to double check your answer, reword your answer, clarify your answer? If you want to record a long voice note into it and have it essentially say, "hey, this is a smoother, better version of what you just said"? Completely cool. You can use AI. But if I want your thought, if I ask for your thought, it had better be your original thought and what you think. Maybe you go to AI and have it challenge your opinion: "Hey, this is what I think. Am I wrong? What do you think?" Cool. Then come back to me with: "This is what I think, Ray."
Guideline number three is: If 25% or more of your message is AI, disclose it. I don't want AI thinking for us. But if you go to AI and you get a really good answer and you're like, "Dude, that's better than I would have said it"? Okay. All right. Cool. If you use it as a thought partner before you sent me an answer, and you say, "That's actually really good, I'm gonna steal that"? All right, I'm cool with it. But disclose it. "Hey Ray, you know what? I gave this a lot of thought and I'm really not sure. So I plugged it into AI, thought about it a bit, and it gave me some really good ideas. Here's what it suggested, and this is what I'm thinking. What do you think?" Just disclose it.
The fourth guideline is: You own it. If you send it to me and you have not read it through, if you have not clarified it, if there are dumb things in it, if there are things that are just completely off the map, then you own that. If your name is on the thing when it hits send, or you send it over to me, that is yours. You're gonna own that whether you're copy-pasting AI or not. Even if you disclose it. So what I want you to do is, before you send that to me, I want you to screen it. I want you to filter it. I want you to clean it up. What I don't want to have to do is go through AI fluff and start filtering it out and be like, "Hey dude, it said this," and hear back, "Oh yeah, I totally missed that. That must have come in when I copied and pasted." No. I don't want to do that. If you send it, you own it.
Those were the core guidelines. What I also said in our guidelines is: Leverage the shit out of AI. Leverage it for research, just like I did. I actually fed it to the whole team. I said, "This is a whole bunch of research that I've done. I've actually tried to distill it and curate it down to the stuff that I think is most important for you." It's perfect for that. Refine your messaging. Like I said, I've got this long voice note; you can plug that in and it can clean it up for you. I go on long walks all the time, I do long voice notes, I rip through a whole bunch of things, and I ask AI to clean it up and make it into messages that I can send. I've gotta read it, because remember, I own it, but you can use it for that. Use it for editing. My newsletter always goes through AI. I used to have somebody copy edit it and proof it. Now for the most part I plug it into AI, it does 99% of the editing, and I do one last check. Again, I own it. I look at it, okay, cool, good to go. Automating tasks. Automating repetitive things that you're doing.
AI can amplify good thought. It can make things really efficient. It can improve the functionality, the productivity, the efficiency of your entire organization. But it cannot think for you. And you do not want to develop "thought atrophy" from relying on it to do the thinking. AI is just a synthesis of what everybody else is thinking. It's taking everything that's been written, everything that's been thought, everything that's been published on the web, and it's synthesizing all that information. It is, by definition, mathematically, a road to mediocrity. It is a road to average. You will get average writing unless you feed it a lot of context and a lot of information and a lot of your own thought.
Which is why if the average business owner goes to AI and says "Hey write me a good outbound script," guess what you're gonna get? Generic bullshit. It doesn't have the context and it doesn't have the nuance and it doesn't have the experience that's necessary to make it really useful and specific for you.
I see a lot of people relying on AI to think for them. Their first instinct is to jump to ChatGPT. And the atrophy of thought is a real thing. I'm sharing this with you because I think it's happening in a lot of businesses. It's happening with a lot of employees, especially in remote environments. And this is an opportunity to get ahead of it, address it. As an early adopter, we're seeing some of the patterns of the later adopters who are using it, and we're going, "Oh hey, this is bad."
Now at the end of the day, this is not all bad. For a lot of us, like if you're in sales consulting, the more that other people do this, the more that we stand out. The quality starts to stand out. Quality writing, quality advice, quality expertise, quality answers, quality employees, quality feedback, quality thought. That starts to stand out now.
My advice to you: don't fall victim to it. And if you're leading a business, leading a team, make sure that you communicate and create some clarity and some expectations with the team about how we do things here, and how we're going to leverage AI to make us better, to make us smarter, to make us faster, not make us dumber.
If you have feedback on it, drop comments below. Would love to hear what you think. And if this resonated, feel free to jump on my list and get tips every week on email. Adios.