GTM Orchestration Explained: The Bowtie Funnel and the Control Tower Model w/ Jomar Ebalida - Ep 37
Episode 37 • 10th February 2026 • Prompted: Builder Stories • Agent.ai

Shownotes

AI agents are everywhere right now. But most go-to-market teams are experimenting without a clear sense of control, governance, or coordination.

In this episode of Builder Stories, Kyle James sits down with Jomar Ebalida to unpack why go-to-market does not have an AI problem. It has an orchestration problem.

Jomar introduces a powerful mental model built around the Bowtie Funnel, a looping view of the customer journey that connects marketing, sales, customer success, and expansion into one system. On top of that system, he explains the need for a control tower, a centralized way to see, manage, and guide AI agents across every stage of go-to-market, with humans always in the loop.

We explore:

  1. Why buying AI tools first and figuring it out later is a backward approach
  2. How the Bowtie Funnel becomes the map for orchestrating agents across the full customer journey
  3. The control tower mental model and why pilots only fly the plane 11 percent of the time
  4. The difference between basic workflow automation and reasoning-based agents
  5. What “human-in-the-loop” really means and how reliability compounds over time
  6. Why the GTM Orchestrator role is emerging as the next evolution beyond RevOps

This conversation is for anyone experimenting with AI in marketing, sales, or customer success who feels like things are moving fast, but not always in the right direction. If you want a clearer way to think about AI, governance, and scale without losing trust or quality, this episode gives you a new map.

Learn More and Connect with Jomar

  1. Connect with Jomar on LinkedIn: https://www.linkedin.com/in/jomarebalida/
  2. Learn more about Bowtie Funnel: https://bowtiefunnel.com/

Transcripts

::

If you don't know what great looks like, how can you make greatness come out of these things?

::

What they realize as they're pushing all these AI, they're creating chaos because now the customer experience is so different.

::

They're not playing off of each other.

::

What they're doing is they're going to cut out the number of daily-task people actually doing the work, and then hire expensive engineers.

::

So then they're actually increasing your costs for everything.

::

Aviation has already come up with this idea.

::

They have.

::

Co-pilots.

::

Literally co-pilots.

::

All these agents have already existed in aviation.

::

We're not trying to reinvent something.

::

But another thing with aviation is the control tower.

::

So think about like each agent having a destination.

::

And then the aviation tower is basically able to see all those agents go around like the different go-to-market stages.

::

And that's how I see it now.

::

And so if you think about it from the aviation industry's perspective,

::

You could trust your life 30,000 feet above ground with this same thing.

::

I'm pretty sure you can now trust your go-to-market.

::

You just have to orchestrate it.

::

You need an orchestrator, the control tower operator, that understands the entire go-to-market stages, those compounding growth loops, and is able to help you navigate it.

::

My guest today is Jomar Ebalida, a go-to-market systems thinker who believes most teams are approaching AI completely backwards.

::

While many companies are racing to deploy agents and co-pilots, Jomar argues that the real problem is not in the intelligence, but in the orchestration layer.

::

Jomar frames go-to-market through his bowtie funnel, a looping model that treats acquisition, retention, and expansion as one system with feedback loops and continuous learning baked in.

::

But before we start, would you do me a little favor?

::

Would you give that subscribe button a tap?

::

And if you hear anything in the conversation that you really relate to or completely disagree with, please leave it in the comments so that we can learn from you too.

::

And with that said, let's dive into this conversation.

::

Jomar, so a lot of teams are experimenting with AI right now, but very few feel confident and like they're in control of things.

::

Like, I know you're working with a lot of teams.

::

What do you feel is broken in how companies are approaching AI in their go-to-market today?

::

Well, I feel like people are just reacting super fast.

::

And I know from the top level, everyone's like, apply AI now.

::

And so they have a lot of pressure from upper management and from their investors to apply, you know, AI into their day-to-day tasks, to basically remove a lot of repetitive tasks that AI can do.

::

But for me, I feel like that's one of the major things is they're acting so like, hey, I'm just going to buy the most enterprise copilot for ChatGPT, the most enterprise Claude for our company, and that will be enough.

::

We'll hire the best engineers and they'll figure it out.

::

I feel like that is something I keep seeing the most.

::

And then what they're doing is they're buying all the technology and then hiring experts.

::

And then they're telling those experts: we have this technology, make it work, because this is what our budget allows.

::

And it's like, that's kind of like the backwards approach that they're doing.

::

And then another thing I'm seeing is, because it's broken into teams, they don't have a central source of AI, like an AI center of excellence.

::

And so everyone's buying tools for separate problems.

::

And so there's miscoordination across all the different departments.

::

Like, oh, I have a pain point here.

::

Let's buy the tool now.

::

And since everyone has, like, budget, like I think at some enterprise companies, anything below $10,000, they're ready to go with a credit card.

::

You know what I mean?

::

And that's enough for a lot of people.

::

But what they realize is, as they're pushing all this AI, they're creating chaos.

::

Because now the customer experience is so different.

::

They're not playing off of each other.

::

And so that's what I'm seeing.

::

I'm seeing a lot of different factors, but those are the major factors I'm seeing.

::

They're buying tools that are not really the right tools to buy.

::

They don't understand the data sources and then they're hiring engineers.

::

So what they're doing is they're going to cut out the number of daily-task people actually doing the work, and then hire expensive engineers.

::

So then they're actually increasing your costs for everything.

::

So it's very interesting to me.

::

So yeah, they've got budget for it.

::

This is something they can spend on.

::

So it's like, let's just go buy something and figure it out.

::

It sounds like a lot of experimenting and testing, but they don't really know what they're trying to experiment and test.

::

And it sounds like that's probably where the backward piece is really coming from.

::

For me, I also see a lot of people that they don't even actually know what they're interviewing for or the right talent to basically obtain for their team.

::

And then, if they're interviewing someone, they're basically just sending candidates these long case studies.

::

I've done some, so I can attest, I have the receipts to prove it, 'cuz I've done them, and what I've realized when explaining some of these use cases, and even these project scopes and deliverables, is that...

::

The people grading them, the people looking at them, don't actually know how to grade them and don't actually know what excellence is because they've never done it before.

::

They're not technical enough.

::

And then they're asking their AI engineers or their machine learning individuals to help figure this out, along with other experts and systems people.

::

But then those people actually lack business strategy or the other ones lack technical strategy.

::

So if you don't know what great looks like, how can you make greatness come out of these things?

::

Exactly.

::

They don't understand the systems.

::

So it's very disparate across different teams.

::

And so, if they don't like your answer... I call it hackathon interviewing, or hackathon use cases, because it's basically a hackathon.

::

It's a hackathon AI.

::

Can you prove that you can solve this problem in this pilot phase?

::

But what they don't understand is that there are so many data points that go into actually getting that into production.

::

There's so much fine tuning.

::

Like, most of them don't even know what neuro-symbolic agents are, which are better for enterprise.

::

A lot of people don't even understand that.

::

And so for me, it's almost like I'm also fighting with people's lack of education.

::

And that to me makes it very difficult to actually get new clients that would understand this.

::

So let's talk about the education then.

::

You have put out a whole framework, what you call the bowtie framework, to do this education.

::

How do you think about the go-to-market function all together and why those pieces need to be kind of orchestrated together and how AI can be done right?

::

Winning by Design, they have a bow tie method, period.

::

So they have, like, the different stages, right?

::

They have the awareness, education, selection, mutual commit, they have onboarding, retention, expansion.

::

And then that's a big loop.

::

Instead of a linear funnel, awareness, education, that's very linear and stops there, this one goes all the way to expansion.

::

How do you get that impact for that customer for it to go loop back around?

::

And then there's also concentric loops within those, like awareness.

::

Oh, it's like Instagram videos, right?

::

You see someone seven times, you educate them, and after you've seen them seven times, they start selecting your brand as the most trusted source, and then they go ahead and buy your product.

::

You know, depending on the price of that product, if it's a lower-cost price, if it's less than probably like $1,000, people will most likely buy it without, like, a consult, even $10,000, right?

::

They don't need a big... to decide, but there's going to be smaller concentric loops around that customer journey that will increase it.

::

And so for me, that's how I saw it.

::

And I just layer that with signals.

::

Like what are the customer signals throughout the entire customer stages?

::

So then I start layering things on.

::

So I start having, like, one massive swim lane, layer that on with signals, layer that on with teams, who's in charge of those different stages, layer that on with technology, and then start looking at the tasks throughout each of those stages to see what can be done by agents.

::

So you're using those signals to not only trigger agents, but also trigger teams to act upon those different things.

::

And that's how I see go to market, right?

::

I see it in signals, triggers, and I use agents to basically escalate the different tasks that need to be done between those stages.

::

And it sounds like the big difference between what you're talking about and the backwards approach companies take is: take a step back, look at the entire funnel.

::

And the way I've seen from your bowtie funnel, the way you describe it, is it's a sales and a marketing funnel put together, and that's where the bowtie comes from, right?

::

It's not one or the other, it's and.

::

Once you have all of that mapped out for an entire company, you could then start looking at these loops and start thinking, like, all right, what makes sense to agentize, instead of, like, here's a tool, we're gonna start using it.

::

And I'm like, where does it fit in here and what's its purpose, right?

::

Is that kind of the big unlock?

::

Exactly.

::

Yes, so it's not even just sales, it's also customer success.

::

It brings in marketing, sales, customer success, account reps.

::

It brings in the whole gamut of go-to-market.

::

And so for me, having that sense of understanding makes everything so much more intentional.

::

Like every single tool has an intent, has a signal, has an ownership of who owns that tool.

::

Because right now what I'm seeing is a massive SaaS wave.

::

Because everyone can vibe-code a freaking point-solution tool.

::

Now, if you want an ICP tool, I could vibe code that for you in an hour.

::

I am seeing a prolific amount of people making so many SaaS tools.

::

Like there's thousands that are coming out, like, every week.

::

I can make one probably with my toes, without even trying, to be honest, and launch it.

::

Like, that's how commoditized execution has gotten.

::

So it's like everyone can literally do it now.

::

But what people need to understand is that you need governance.

::

You need an orchestration layer on top of this.

::

That's where the most important thing comes in now.

::

When point-solution SaaS tools become so trivial, you need systems and you need a governance layer to organize how those signals are working throughout your entire customer journey.

::

It's interesting, too, because you said this at the beginning, and you're saying it again now: silos, right?

::

And we know an organization, especially as the company gets bigger, departments have silos around them.

::

And it sounds like what you're identifying here is that's also a problem for the AI: to have the context about this whole funnel and how things fit together, instead of just solving this one specific thing.

::

A lot of times you do need that cross-departmental context and how do you share that?

::

So it's almost like we have to not only break down the silos for individuals, but we also have to break them down for these agents too, right?

::

And these AI tools?

::

What the research I've been looking at and the practices I've been seeing suggest is breaking them down by ARR.

::

And depending on what stage they are in their company, that's what you should focus on.

::

And those are the agents that you need to be deploying.

::

And so it's not just breaking it down by ARR, but also by the stages.

::

For me, I have an orchestrator that sits on top of all my sub agents.

::

And all of those sub agents are broken down by the bow tie funnel stages.

::

And then most of those stages are like, say, the awareness function.

::

That's more like research.

::

That's more like signals, ICPs.

::

Like, are they looking up my website?

::

These are different types of signals.

::

And so those don't really need more like human intervention.

::

Human intervention is like: is the procurement being sent out?

::

Do I need to look at this procurement?

::

Like, I need to go see how I can communicate with this person.

::

Like, I wouldn't trust an agent to do that.

::

I would want someone that understands the different types of complexities, the human empathy, and why that person has trouble accessing your platform and tool.

::

So for me, it comes down to: different signals need different human intervention, and those stages come in at basically the latter part of your bowtie funnel stages, or customer journey stages.

::

So talk to me about, with these agents, where do you draw the distinction between "this is just a good workflow" versus "this is something that needs to be turned into a reasoning-based agent"?

::

Like, how is that shift happening?

::

And one, how do you decide if it's okay just to be a workflow?

::

And how do you see companies like making that change?

::

If it's a very yes-or-no workflow, like, this is a yes, or take this information, summarize it, and then send it to me, that's trivial.

::

Like that to me is like, okay, I'll just make a workflow for that and that will work fine.

::

But if it's different workflows that involve multiple types of factors and variables...

::

I break it down to directive, what is the agent doing?

::

Or like orchestration, what type of agents need to support that one like objective?

::

And then execution, what tools do they need?

::

And then learning, how are they learning per execution?

::

And then governance, what are the governance needed?

::

Like are they masking different API fields?

::

These are things that need to be taken into consideration.

::

So as your agent learns, it's building compounding reliability, and it's getting reinforcement from those actions.

::

When we last talked, you talked about GTM orchestration and kind of gave it the whole analogy of an aviation tower.

::

Like, walk us through that, because I think it's such a good way to think about it that I haven't really thought about before, but you do such a good job of explaining it that way.

::

For me, I'm like, how do I...

::

Like, aviation has already come up with this idea.

::

Yeah, they already got co-pilots.

::

They have co-pilots.

::

Literally co-pilots.

::

They have thousands of co-pilots.

::

All these agents have already existed in aviation.

::

We're not trying to reinvent something.

::

And for me, I understand that humans are very good at judgment.

::

They're not very good at pattern recognition.

::

I'm very good at understanding industry patterns and how I can apply them to something else, just because I've worked in so many other industries: that thing worked in that industry, I can copy it over here.

::

And so I realized that with the aviation industry, they've already figured this out.

::

But another thing with aviation is the control tower.

::

So think about like each agent having a destination.

::

So each agent has a certain task within the go-to-market stages.

::

And then the aviation tower is basically able to see all those agents go around like the different go-to-market stages.

::

And that's how I see it now.

::

Having that metaphor and having that type of idea in my head makes everything so clear to me because now I'm like, okay, the human judgment, human in the loop are basically the landing and takeoff.

::

Like, what are the highly critical things where human judgment is required, and which ones are not?

::

And so that puts, that shapes the entire go-to-market signals for you.

::

And so if you think about it from the aviation industry's perspective,

::

Most people, you could trust your life 30,000 feet above ground with this same thing.

::

I'm pretty sure you can now trust your go-to-market.

::

You just have to orchestrate it.

::

You need an orchestrator, the control tower operator that understands the entire go-to-market stages, those compounding growth loops, and is able to help you navigate it.

::

And that's where I think it's going to start changing.

::

So this GTM orchestrator is the person who's constantly, probably, bringing in more of those data points to figure out how this tunes everything, right?

::

And then do you see a lot of companies hiring for this GTM orchestrator?

::

Because a lot of companies are still trying to figure out this RevOps role.

::

And this is like another evolution or iteration of that.

::

So how does this all fit in?

::

And how long is this going to take?

::

And maybe even most importantly, what do you think is the most important skill set for this person to really be successful?

::

The word architect is being thrown around.

::

What I'm seeing is that RevOps leaders are expected to understand AI agents.

::

They're expected to like understand all the signals and stuff and how does this all fit?

::

And it's a bit overwhelming for them because they have their day jobs.

::

They probably got kids.

::

They like, they don't got time for this.

::

You know what I mean?

::

And so for me, because I have this analogy of aviation, every time a new LLM drops, I'm just swapping out parts for my planes.

::

I'm getting a faster plane.

::

I'm getting a faster engine.

::

I'm just getting more fuel for my AI, because I have the orchestration layer and governance. New AI tools don't scare me.

::

Do not scare me because I know how all of those agents are working together.

::

I am the control tower for those different agents.

::

And that's how you don't build fear into AI.

::

You become the orchestrator of those agents.

::

Well, you kind of alluded to it, so let's go there: your own platform that you're building.

::

I know that you're in the final stages of launching your own book about the orchestration layer.

::

Tell the audience a little bit about that.

::

If they want to go deeper into this, where can they find you there?

::

And how can they get access to these resources and these frameworks that you're putting together to guide them on this?

::

My domain is bowtiefunnel.com, so they can just go to bowtiefunnel.com.

::

The first link there will just be the book, and you'll find all the worksheets and the book there.

::

And to me, because I'm trying to be very loyal to my SMB clients and SMB friends that own small businesses, I almost think it's a love letter to them, because they are the most affected by AI, because they don't have a long runway compared to these enterprise companies.

::

They're one mistake from closing shop.

::

So my go-to-market command tower, yes, I'm catering to small and mid-sized companies, but it's like a love letter to them, a thank-you for keeping going during uncertainty.

::

Because I had a system in mind of how I wanted a tool to operate, like in aviation, including a human in the loop: each agent knows what tool it uses, who owns it, the confidence score, like, how much it would cost to run this agent.

::

It's perfect for SMBs and mid-size companies that are sensitive to cost, sensitive to understanding, like, what do I approve, what do I decline, you know, escalating that human-in-the-loop task.

::

The product isn't amazing in the sense of being revolutionary.

::

It's just adding a governance layer to tasks that should be done.

::

You know what I mean?

::

Like, so for me, I don't feel like I'm building anything super amazing here.

::

I'm just building something that everyone should be using.

::

It's a scaffolding to use the things that you want to use.

::

It's a scaffolding.

::

It helps you build that compounding reliability.

::

Like, for most of the small clients that I have, I tell them: make your own email for your agent, have them help you.

::

And then have them read your email, have them understand your brand, your tone, give them examples of past emails that you've written, and have them compose and draft that email for you.

::

But you or your front desk get the final push of send.

::

They did all the work for you.

::

You just need to send.

::

That's how it should be, right?

::

Like, for me, that's why I do not trust a random agent that only has, like, 50 outputs sending to my brand. You need to protect your brand, you need to protect your company.

::

One final question, and I'll let you have the last word here.

::

What is one thing we didn't talk about, or one thing that you've been thinking a lot about, that people just need to wrap their head around when it comes to AI, agents, and orchestrating all this together?

::

For me, it's really about empowering your team: yes, it's fine to experiment.

::

It's fine to be imaginative, because there are so many things people can come up with that are so innovative.

::

But at the end of the day, you also need to think about big picture and what does that mean for your organization and team and how realistic is it?

::

FOMO is so high right now.

::

It's at the highest I've ever seen it in my entire career.

::

So be realistic and remove that ego of like, I need to build this.

::

I can, I need to own this.

::

It's like, it's still a very much team dynamic kind of situation here.

::

People still need people.

::

Imagine that.

::

Yeah.

::

And the people that are going to be using these agents, like I said, they're pilots.

::

And I really hope that sense of the aviation analogy sticks with listeners today, because that's exactly how I see it going forward.

::

Love it, love it.

::

Well, Jomar, thank you so much for joining me for this conversation.

::

All of you out there, if you heard something that struck a nerve with you, leave a comment, tell us about it.

::

Let's keep the conversation open, and I'll make sure in the show notes you'll have access to all of Jomar's links so you can go check out his book, check out his framework, and play with his tools if you want.

::

So get it there and keep learning, keep your ego at the door.

::

And until next time, don't stop playing with these things, guys.

::

Take care, everybody.
