Data Strategy Evolved: How the Biological Model fuels enterprise data performance
Episode 17 • 9th May 2023 • Data Science Conversations • Damien Deighan and Philipp Diesinger
Duration: 00:56:37


Shownotes

In this episode Patrick McQuillan shares his innovative Biological Model - a concept you can use to enhance data outcomes in large enterprises. The model is built on the idea that the best way to design a data strategy is to align it closely with a biological system.

He discusses the power of centralized information, the importance of data governance, and the necessity for a common performance narrative across an organization.

Episode Summary -

- Biological Model Concept

- Centralized vs. Decentralized Data

- Data Collection and Maturity

- Horizontal translation layer 

- Partnership with vertical leaders

- Curated data layers

- Data dictionary for consistency

- Focusing on vital metrics

- Data Flow in Organizations

- Biological Model Governance

- Overcoming Inconsistency and Inaccuracy

Transcripts


DD: This is the Data Science Conversations Podcast with Damien Deighan and Dr. Philipp Diesinger. We feature cutting edge data science and AI research from the world's leading academic minds and industry practitioners so you can expand your knowledge and grow your career. This podcast is sponsored by Data Science Talent, the data science recruitment experts [introductory music]. Welcome to the Data Science Conversations podcast. My name is Damien Deighan, and as usual, I'm here with my co-host Philipp Diesinger. How's it going, Philipp?

PD: Great, thanks Damien.

DD: Great. So often on the show we focus on quite a narrow data science topic, but today we're taking the big picture view on how organizations can be more successful in their data science and analytics strategy. And in the current era, where many of us are tasked with doing more with less, it's a timely conversation to be having. Our expert guest on this crucial subject is Patrick McQuillan, who's joining us from Boston. By way of quick intro, Patrick is an analytics executive, professor and strategy consultant with immense experience in using data as a tool for change and critical decision making.

He received his MBA from Oxford University and holds a combined bachelor's degree in economics and international affairs from Northeastern University.

He's held several senior leadership roles across various industries, most recently as the global head of data governance and operational effectiveness at Wayfair. And prior to this, Patrick led international consulting teams on driving data strategy and technology enablement for the Fortune 100. He also has worked with government agencies and higher education clients as part of that work, and he's even found the time to have served several posts as an officer for the United Nations, which I'm looking forward to delving into in the next few minutes. So Patrick, welcome to the show. It's lovely to have you here.

PM: Very happy to be here, Damien.

DD: We'll start as usual with your background and your story. How did you make it into the world of data from academia?

PM: It's a very interesting question and, to be brief with it, it's funny: only recently, I think, have bachelor's and master's degree programs been constructed specifically around analytics and data. Before then it was very heuristic. It was mainly teaching yourself and learning on the job. Most of my fellow data scientists really took no more than one or two stats courses in university, and most of what they know was actually taught on the job. Something similar happened to me. I graduated with studies in international law and economics, and when I did work with the United Nations, I focused mainly on economic policy and eventually wanted to learn more of the quantitative aspects of that space.

And over time that [inaudible].

DD: Okay, awesome. And I have to ask about the United Nations. Tell us, how did that happen and what was it about?

PM: [Laughter] That was my first foray. Before I entered the world of data science, I had a very large interest in international security and particularly the laws of war. And so I'd gotten a brief fellowship working with the United Nations in partnership with the Graduate Institute of Geneva. I worked supporting some research for an arms trade treaty that was going through the United Nations in Geneva, in partnership with researchers at the institute. Eventually that led to a post in the New York HQ, and then eventually another post where I was leading a team in New York HQ. It just kind of grew from there, and I loved it, but I loved data more. So I made the switch.

DD: [Laughter] Great. You have a concept which I think is really interesting, you call it the biological model. So perhaps we start there with an explanation of what that is in the context of data strategy and organizations.

PM: Absolutely. What it essentially is, is the idea that all data, decision making power and resources should be consolidated into a single epicenter that's connected throughout an entire business or organization. And so it becomes a self-feeding system: it's able to react quickly, it's able to get ahead and predict anticipated challenges or bottlenecks based on what it's historically encountered, and most importantly, it continues to learn. It basically provides an adjacency between the data and the key decision makers in the organization who need immediate access to it, rather than scattering it across individual managers on different teams and having decentralized analytics hubs or centers of excellence across different verticals that don't communicate with each other.

PD: Patrick, maybe you can explain a little bit what's the difference between the biological model that you mentioned, as a nervous system or neural network, and the normal centralized data governance or data strategies or structures that you already find in small and medium sized companies, where you don't have the capacity to decentralize everything and because of that most of it is centralized already.


PM: Yes. So that's an excellent question, and it definitely begs distinction. So you're correct. The biological model is named that way, just to highlight that point, because it simulates the central nervous system, with the brain as the epicenter and the key decision making component. The nerves connected to different parts of the body collect information: what's felt in the fingertips, what's felt in the internal organs, what happens through automated processes like blinking and breathing (similar to AI), versus non-automated, conscious processes like reaching out, actually grabbing things and influencing the world, which all come from the brain and the decisions being made there. And so what's very important in this model, more than anything, is a very sound and healthy data foundation. There are basically two things you'll typically encounter in industry where there's a bit of an issue that needs to be resolved in that space.

The first, to your point, which is the obvious one, is companies that just have decentralized data centers that can't really communicate with each other, where it's difficult to get the full picture quickly at the decision-making level. But to your point about small and medium sized businesses, where a lot of the time the data is centralized, the issue there is that the data is for sure centralized, but is the nervous system actually healthy? Are they collecting the right data? Is there enough volume to actually make decisions? Are they in a bit of a tight competitive pinch where they're trying to get ahead of some competitors and maybe getting ahead of themselves before they have enough information to make a safe call? And if they don't have that information but they need to make a decision, is it all they're relying on, or are they reaching out to other non-data sources to help fill those gaps?

And I feel like that's pretty much what happens: you can still have the system working, but it's not necessarily flowing in the way it needs to. It's not fluid. There may not be enough neurons, not enough information collected. In the sense of practical application, this could mean not enough data sources, it could mean not enough testing, or, if you're trying to go to market, maybe rolling out and scaling up a new marketing plan without testing in the right markets first, or without finding statistical significance in the results before branching out and scaling that strategy.

Similar with supply chain optimization and pretty much anything happening end-to-end on the ops side as well, where these companies may just be getting ahead of themselves with a less than preferred level of maturity for their data infrastructure. The data is going to the brain and it is being collected, but it's incomplete. They may have too much information on the arm, but not enough on the leg. And it ends up creating a system that overcompensates in some areas and undercompensates in others, which becomes a very difficult habit to break as you scale.

PD: So the difference between the healthy biological model and an already somewhat centralized data system would be not just the centralization of data, but its translation into meaningful, actionable insights and business actions. How about large organizations, which are mostly decentralized? If they're not tech companies or data companies, you will find them really decentralized, because that's typically how the company has evolved, how capabilities have grown organically, so the verticals own their own data. What would be the advantages of centralizing the data model for a large organization?

PM: Yes. So that's actually exactly the reason why the biological model came about. Typically these organizations, to your point, because they need to scale quickly and the need is there, will end up creating different centers of excellence, which ends up becoming no center of excellence at the end of the day: supply chain will have its own vertical, merchandising and marketing will have their own verticals, and customer service will have its own vertical. You have all these different verticals across an org, software development, tech, anything like that. And what you end up seeing, to your point, is these different decentralized data centers. And while that does operate with a certain degree of functionality within those verticals, with all of my experience on the industry side and consulting to industry from the outside, I've yet to see a system like that work in the long run.

I think you'll have individual leaders of those verticals testify that it works, but the people who report laterally to those leaders, or the people they report up to, will always mention that there's going to be a knowledge gap, because each leader is focused only on their lane without really understanding the context as much as it needs to be understood, in terms of how other silos may end up being impacted. And so this is particularly important at the leadership level. When I say leadership for large companies, I mean C-suite and board level, where they'll be asking for performance, they'll have to report to shareholders, they'll have to make key decisions on a quarterly basis, or even a monthly basis if something very critical is happening across the org. And it is extraordinarily frustrating for these leaders to basically say, "Great, we need to do something. How are we doing?"

And it takes them three weeks or a month, and costs a lot of money, to get all of the FTEs on the ground to pull those reports, which are usually substandard compared to what they could be if it was centralized. And the reports are usually contextualized only within the lens of that particular vertical. And so the benefit of having a fully centralized system is, again, instead of eight little brains, one reaching out to the finger, one reaching out to the leg, collecting that solo information and then communicating with each other with a delay to make a decision to tell the body to walk, what we're doing is saying: collect all these neurons, all these touchpoints, all these decision making components, and all the key decision makers, in one place.

So we have the brain, which again collects our data, but is also where our consciousness is and where we make these decisions, so we have both the data and the decision makers all in one place. Put it at the head of the organization, and loop all of those existing lines of data collection and communication into that central base. So you can still have the silos, I'm not saying deconstruct those, but you need a leader in place, whether it be a CTO, a CIO, or some sort of VP of efficacy, or business intelligence, or something like that, to take each of those verticals and create a horizontal translation layer where their input can be leveled out and presented easily and readily to senior leadership. And that person can also be a partner to senior leadership and the other vertical heads to basically make those decisions and have a hand in each cookie jar, so to speak.

And it's a very simple glue. At the end of the day, it costs only as much as maybe one additional leader and a small team of two or three to help navigate that environment a little bit: to create some glue, grease the gears, develop a diplomatic relationship with each of these verticals, and then set up what is usually, at that level, a pretty simple framework of data collection and reporting that can keep folks accountable, increase transparency, and ultimately make the organization remarkably more effective with very little lift or cultural change.
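To make the horizontal translation layer described here a little more concrete, below is a minimal sketch in Python. It is an illustration only, not anything from the episode: the vertical names, KPI names, and figures are all hypothetical, and it assumes each vertical can export its upward-facing KPIs in one small, common shape.

```python
# A minimal sketch of the "horizontal translation layer" idea, assuming each
# vertical publishes its executive-facing KPIs in a common shape:
# (vertical, kpi, value, period). All names and figures are hypothetical.
import pandas as pd

supply_chain = pd.DataFrame({
    "vertical": "supply_chain",
    "kpi": ["on_time_delivery_rate", "inventory_turns"],
    "value": [0.94, 6.2],
    "period": "2023-Q1",
})
marketing = pd.DataFrame({
    "vertical": "marketing",
    "kpi": ["customer_acquisition_cost", "return_on_ad_spend"],
    "value": [38.5, 4.1],
    "period": "2023-Q1",
})

# The translation layer is deliberately thin: stack the vertical extracts and
# pivot them into a single table that senior leadership can read in one place.
curated = pd.concat([supply_chain, marketing], ignore_index=True)
executive_view = curated.pivot_table(
    index="period", columns=["vertical", "kpi"], values="value"
)
print(executive_view)
```

The design choice being illustrated is that the verticals keep their own detailed data; only an agreed, small KPI extract crosses into the curated executive view.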

PD: So you mentioned that this impulse needs to come from the C level. In your opinion, what's a good way to convince leaders in verticals to share, or give up a little bit of, the ownership of the insights that come from their data?

PM: Absolutely. So I think that is for sure a pain point that some teams encounter, particularly, to your point, in large organizations. And I think most of the pressure is that they don't want to relinquish control, not for any sort of selfish reason, but because if they don't have full control over something, it may make them a little nervous about the performance of that vertical. And so usually the way these partnerships are constructed, and the way I've seen them work in the past, is that there is no actual relinquishing of control. It's more of a partnership with these different vertical heads, or with that key translator or team of translators at the leadership level, who basically say: look, your name is going to be on this report one way or the other, and it's costing you guys $40,000 a month to chase down and pull these metrics together.

That's going to be almost $500,000 a year per vertical going into pulling these reports. Why don't we save ourselves a few million dollars and get a small team of three or four folks to stand this up? And you present it as a partnership: you're going to have more time to innovate, more time to chase down projects, and fewer bottlenecks in your work stream if you allow us to help with that reporting. And again, your name is going to be on this, so we'll be able to share this up to the C-suite, and instead of having the C-suite ask questions like, what is this and what's happening, which are typically unpleasant surprises.

Instead, it's going to be something wonderful is happening and our name is on it and this looks great, or something suboptimal is happening, but we've already gotten ahead of it and we can already speak in greater detail, maybe even issue a report before this large meeting or before these big reports go out to the C-suite to get ahead of it. It increases efficiency and creates more confidence. And whether something's going well or not as well from a reporting standpoint for a certain vertical, they always come out looking better because if that thing was going to happen one way or the other, this partnership typically allows a little bit more presence and being able to anticipate it.

There's an external partner they're working with on the reporting side who understands how it might affect different verticals in the org, say if it's a supply chain issue and software engineering needs to be roped in and they need to reach across the board, someone who can help manage that relationship and manage both of their work streams, so that supply chain doesn't have to worry about software's work stream and vice versa.

So there's a third party to help the two of them collaborate and navigate those waters a bit, and eventually roll up what would ultimately be the same solution they would normally roll up, but better. It would be reported more quickly and with more agility, they'd be saving money instead of chasing down reports and pulling people on the ground off projects to issue ad hoc or recurring reports for senior leadership, and it ultimately creates a broader impact across the organization that can be sustained and, over time, scaled quite easily, I think.

PD: And where do the data scientists and data engineers sit in this model? Do they move into that central team, or do they stay within the individual verticals?

PM: Yeah. So that's an excellent question, and I think it depends on the size of the organization. If it's a large organization, I believe that data scientists and engineers work best closest to the context of the data they're particularly responsible for. So I will trust the data scientists and the engineers in those individual verticals to give me the best insights, to construct the best models, and to have access to more granular and deeper metrics that maybe we're not collecting at an executive level but that they can be trusted with. So when it comes to a large organization, I believe in the data engineers for each vertical maintaining that vertical's entire cloud base, which might include 20 KPIs that are reported upward, but maybe an additional 500 that are reported just within that team and that external teams don't need to worry about.

And it's similar with the data scientists on those teams. In my mind, with a large organization, it's dubious and tenuous ground to have a data scientist or an engineer full-time at that executive or near-executive level for the curated layer, strictly because context is lost. When context is lost from a reporting standpoint, it's easy enough to recover that understanding by chatting with folks; you get qualitative context. But when it comes to inheriting code and looking at many years of analyses that might feed into a model, it gets very nuanced. And I think it would be duplicative and not beneficial to the org to have those particularly technical roles at that high level.

There might be some technical-adjacent roles, where you have someone who has the ability to pull the data or someone who can vet the data science models. But I think the actual construction and building of those at a large org should remain in the verticals, because they're closest to the context of what they're engineering. With a small organization, though, if there's not too much data and the org is quite small, maybe under 500 people, maybe under 1,000, even if you have the infrastructure for it, then I think it makes sense to have your engineers, data scientists, and this reporting team all in one place. Because at that point the scale of the organization isn't so unwieldy that each team is almost its own business; instead, they're just large parts of an even greater whole.

And when you think of scaling, I think it's easier to have those teams at a higher level, just to be able to manage information without it being lost if the company continues to grow, and to manage models that have a foot in each area. As the company grows, if you need to hire specialists onto those teams, you can. But for a skeleton crew, or a company where things are moving very quickly, it's good on a smaller team to have all those folks in the same place, I feel.

DD: Is there a size of organization, Patrick, where the decentralized model doesn't really work?

PM: I'll be careful with naming a specific size, 'cause I know that the size of, let's say, a software company might be very different from the size of a retailer, and so they may have very different structures in the way those teams are built. But generally, to answer your question about there being a threshold, I think that every company of every size is best off having that translational performance layer for the C-suite. The decentralized model works, businesses run on it, but I don't think it's particularly efficient from what I've seen.

I think it's more expensive. I think it can exist the way it does, and it should, with large organizations, and when I say large, let's say as an arbitrary number, a company of $500 million or more. But if it is going to exist in that large org, at the point where you need individual contextualized teams working in each vertical, you still need that horizontal team. So to summarize: with a small organization, you just want that one horizontal team to manage everything and help build up to that scale. And because you're starting early, you're focusing on good data from the beginning; that's your mindset rather than the model. You're focusing on getting good data and keeping it clean, building an infrastructure and an ETL process that's automated and clean, and putting in a good cloud database and storage format. All of that is so important.

It's almost like saying that we need a car and you're focusing on the engine, but you forget to put wheels on it. The engine's great, but the car won't drive. So you really, really need that. It's like nurturing a young brain, right? You want to feed it good things so it can grow and develop, even if it doesn't know what it's going to become. But when you get to the higher level, to give a biological example, it's like someone getting older: maybe their brain is starting to reach a point where it's less elastic, neuroplasticity is lower, more is happening with the body, bones are more rigid.

There are probably going to be more health complications later on, just because of the age of the body and what it's been put through in a competitive space, just growing and living. You still want to collect that information from each location, and it's actually important to get more information from each specific location more often, to have those decentralized teams talking about it and really taking a deep dive with a magnifying glass on each part of the body. But it's rendered much less efficient if there's not a part of the brain that's listening.

The hand can ache, the back can hurt, the heart can skip a beat, but if you ignore it, then there's really no point in getting that information. And so it needs to be centralized with some sort of a translation layer, a group of folks who have a combination of data engineering background and strategic thinking, to pull that information together, identify what's important, triage in order of priority what needs to be addressed, address it, and share it upward with leadership to make sure they're continuing to stay effective despite that initial decentralization.

DD: I think the devil's advocate position might be, if I may for a moment...

PM: Please.

DD: …that at a certain size, one of the objections will inevitably be that it slows down, it becomes bureaucratic in very large organizations, say 10,000, 50,000 and so forth. How do you implement the model, the biological model, but still retain some air of agility with the data strategy?

PM: The best approach that I've had in recent experience always comes out with the same result. To give an example, when companies have these concerns, the concerns are sound, but I would actually argue that it doesn't slow you down, it actually speeds you up. I think the question is, will it slow us down because there's an initial cultural shift or an initial lift that may have to happen? Yes, of course. But if you bring in a consultant for any reason, you're going to get the same effect from an external partner. Whereas if you're bringing in a team and building it internally, you're going to have something that lives on in the long run. You're going to have a group of folks who are intimately involved with the company, and it's going to be a short-term cultural shift, just the same as anything else you might be working on.

And when I say bring in a consultant, I don't mean for this type of work, I mean for anything. There's always going to be some cultural shift when trying to bring in an external opinion. And so when you're trying to incorporate that opinion, whether it be through a consulting engagement or in person on the ground, there's going to be that shift one way or the other, but it's going to save you a lot of time. And the way I see this type of work is that a lot of leadership looks at things from a profitability perspective, right? They think, what is this going to get us as a net-new value add? But a lot of the time, and this is coming from my background in economics, that's what's called a financial profit. There's also something called an economic profit, which means the avoidance of a loss instead of the acquisition of a gain, which comes out to the exact same value.

If I paid you 10,000 quid or I saved you 10,000 quid, it's the same result. And so a lot of what this model does is it says: hey, like I mentioned before, let's say you have five people each month tracking down a report. Each of those FTEs has an average salary of about 10,000, and maybe they take about 10 hours a week to work on this. Well, then that's essentially 40,000 a week, times four weeks, and all of a sudden every quarter you're spending 160,000 per vertical just to get this report out. Over years and years, that adds up to quite a bit, that adds up to tens of millions. And so by investing in a team where, ultimately, team and software cost less than a million dollars, maybe even less than 600,000 or 500,000, you're able to save tens of millions in the long run and get a lift in your performance.

[inaudible] …tize their files in the early [inaudible].
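As a rough illustration of the cost argument above, here is a back-of-the-envelope calculation in Python. The figures are loosely based on the numbers mentioned in the conversation but are assumptions for illustration only, including the number of verticals, the share of time spent on reporting, and the cost of the central team.

```python
# Back-of-the-envelope version of the "economic profit" argument: a saving
# avoided is worth the same as an equivalent gain. All figures are assumed.
ftes_per_vertical = 5            # people pulling reports in each vertical
monthly_salary = 10_000          # assumed fully loaded monthly cost per FTE
share_of_time_on_reports = 0.25  # ~10 hours of a 40-hour week
verticals = 8                    # assumed number of verticals in the org

monthly_cost_per_vertical = ftes_per_vertical * monthly_salary * share_of_time_on_reports
annual_cost_all_verticals = monthly_cost_per_vertical * 12 * verticals

central_team_annual_cost = 600_000  # small horizontal team plus tooling (assumed)
annual_saving = annual_cost_all_verticals - central_team_annual_cost

print(f"Reporting cost per vertical per month: ${monthly_cost_per_vertical:,.0f}")
print(f"Annual reporting cost across {verticals} verticals: ${annual_cost_all_verticals:,.0f}")
print(f"Estimated annual saving with a central team: ${annual_saving:,.0f}")
```

With these assumed inputs the saving compounds year over year, which is the shape of the argument Patrick is making, even if the exact figures differ by organization.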

DD: And what are the specific challenges or obstacles that large enterprises might face when they're transitioning to the biological model based data strategy?

PM: I would say there are two big ones. The first is absolutely cultural. It's what we talked about already, developing those partnerships over time, which do come, and they come with ease, but they require patience. Sometimes it can take three, four, or even six months to get everyone fully on board, and that's okay if you're investing in long-term infrastructure. If it's a smaller group, it can take a month or less, depending on the size of your organization, to get people on board and understand what their needs are. The goal is to frame how the data is serving them and the benefits they get from this partnership. So it's less of a give and take and more of: we're working together on strategy, and we're going to help bring the work that you're doing to light across the entire organization and help everybody be more effective in how they think.

The second is definitely technological, and that is primarily from a data collection standpoint. For reporting, it's as simple as a self-service dashboard with Tableau, Power BI, Data Studio, whatever you're working with. And from an AI activation standpoint, a lot of that is done by localized teams, so that work doesn't really need to be worried about much. Like I mentioned with the biological model, your eyes blink without you realizing it, your hand might twitch without you realizing it, and if you burn your hand on the stove, it'll pull back. That's the AI-automated component; it's just going to function best at the local level. But for the folks who are trying to curate a lot of this data and pull it into one place, that can take time. But that's the whole reason this need is there, right?

It's similar to bringing in a VP of growth and saying, we're going to have to take some time to develop relationships with new clients to build up our revenue stream. It's still going to take time, you're still not going to see an immediate payoff, but over time it's going to snowball. So it's similar here: instead of a head of growth, you're bringing in basically a head of data transparency, a head of efficacy, a head of performance measurement and reporting, or business analytics, whatever you want to call it. At the end of the day, it's marrying the data from the backend with the conversations and needs of each team, and melding them into a consistent narrative for leadership, to save everybody a lot of time in understanding what's going on and who needs to do what. So it comes out pretty well, but that technical build can take a little bit of time.

PD: So Patrick, we talked a lot about how data flows through an organization, how it's curated, how it's turned into insights and actions. Data governance obviously plays a very important role in that process. How is data governance set up in the biological model that you're talking about, and how is it different from traditional data governance systems?

PM: Primarily, it's different in two ways. One, and I know we touched on this already, it's not disjointed. The data's all being collected and pushed into one central location through a single source of truth. But secondly, to deliver that single source of truth, there's alignment. In most organizations, the way a KPI is calculated is usually quite subjective, right? A lot of the time, let's say senior leadership says, we want to understand what our return on CapEx over the past six months has been, and I want each team to tell me. Well, each team may have different rules for what counts as capital expenditure.

I mean, I don't know that any folks from accounting are going group to group and making sure they qualify it as such. Each team will be a little subjective here and there. And the way they measure the returns they weigh against that CapEx may be different. Maybe some folks are only looking at sales; maybe other folks are saying net sales and deducting any churn that came about over that period. Who knows? And so there's no standard in place, and that's where inconsistency and inaccuracy in reporting happen.

[inaudible] …reasons why a lot of companies in [inaudible].

And it's these types of conversations that I think are lacking in a lot of organizations. What the biological model assists with is that it does centralize that information. But secondly, through that governance model within the biological model, the way we govern the collection of information is also about getting alignment. So is pain the same as pain, is heat the same as heat on each part of the body? Or are we mistaking a slap on the hand for something good and a burn on the hand for something bad, so that suddenly we have two things that are bad, but one of them is being registered as good? We really need to get alignment on that.

So the governance infrastructure is basically making sure that everyone is telling the same performance story and making the same assumptions when they report, and not leaving it 100% to subjectivity. Basically, the process would be them coming to the reporting team and explaining their logic for the KPI, and then the reporting team usually just wrapping that into the narrative. Very rarely will they tweak it, but if they absolutely need to, they'll say, there have been conversations about trying to do this, can we reach a middle ground that might make everyone happy in how we report this? So it's basically those two things. And then the rest of it comes when you combine it with that curated layer model, where we're not disrupting data engineering and we're not disrupting data science teams.

That way the automation and the sourcing of the information remain localized to each limb, each part of the body, and just get funneled up purely for reporting so the brain can make decisions. And then lastly there's the data dictionary, and that end-to-end process where we say: okay, we have all of our knowledge for all of the most important metrics in one place, and we have a very consistent process for fixing things and getting ahead of things if, say, there are any technological or data-related adjustments that need to be made, so that the teams that receive the data and the teams that produce and curate the data are all working properly without any surprises on either end, which can happen in large organizations, particularly when these teams are decentralized.

So all of that together creates a very strong infrastructure because you have transparency, you have trust, you have centralization, and most importantly it becomes much easier once those small shifts occur. Most of this work is already happening, it's just talking about it differently and maybe creating a few very simple tools to house it all in one place. And then from there it's all just conversation.
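For readers who want a concrete picture of the data dictionary Patrick mentions, here is a minimal sketch of what a single shared KPI definition might look like. Everything in it is hypothetical: the field names, the owner, and the return-on-CapEx formula are assumptions used only to show the idea of one agreed definition per upward-reported metric.

```python
# Minimal sketch of a shared data dictionary entry, with hypothetical fields.
# The point: every upward-reported KPI has exactly one owner, one formula,
# and one set of inclusion rules that all verticals agree to.
from dataclasses import dataclass, field

@dataclass
class KpiDefinition:
    name: str
    owner: str                       # vertical accountable for the number
    formula: str                     # the single agreed calculation
    inclusions: list[str] = field(default_factory=list)
    exclusions: list[str] = field(default_factory=list)
    reporting_cadence: str = "monthly"

return_on_capex = KpiDefinition(
    name="return_on_capex",
    owner="finance",
    formula="(net_sales - churned_revenue) / capital_expenditure",
    inclusions=["capitalised software", "warehouse equipment"],
    exclusions=["leases under 12 months"],
    reporting_cadence="quarterly",
)

# Any vertical reporting this KPI references the same definition, so the
# C-suite sees one consistent performance narrative instead of eight variants.
print(return_on_capex)
```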

PD: So there would obviously be some changes in data policy for an organization, and also changes in the way data flows, with this kind of basin of insight and truth at the end. How about roles and responsibilities? Would the biological model require changes to those, or would there even be new roles and responsibilities in that model?

PM: Yeah. The beautiful thing about the model is that there are small changes but nothing dramatic. The only large change is the way the organization thinks about its performance, and cleaning up that thinking. The small changes would be, first of all, to my point, the creation of a role or function adjacent to the C-suite, probably equivalent in traditional business terms to a VP or division-head type of position, with a couple of direct reports who are able to work horizontally, the only horizontal team in an org built on verticals, and just making sure that those conversations are happening and that those standards are in place to provide guidance. In terms of existing roles within the organization, take the VPs, or the folks running these verticals, for the sake of argument.

So the VPs, and then the folks on those VPs' teams, usually analysts or managers who are responsible for reporting the KPIs, will have slight shifts in their roles, but very minor. When I say very minor, I mean two or three hours a week will be swapped out to do something different, not added on. Typically, with the VPs, it's just going to be inviting that horizontal team to come in and sit in on maybe their WBR or any group team calls that they have, just to get visibility, be a fly on the wall, and understand the context of the work and how things are pacing. And also to sit in on those business reviews that report up to the C-suite.

So you make it a collective discussion with everyone in the room rather than one-on-one after one-on-one, VP-CEO, VP-CEO, VP-COO. They can still have their one-offs, but you have these recurring calls on the entire health of the org, the folks responsible for each of those verticals are on those calls, and the information is clearly presented, with no doubt about its accuracy. And then, like I mentioned, there's that other role, where the analysts on those teams would be speaking to the KPIs. That's really the only other change: the analysts would be on some of those calls, probably more of the lower-level ones, and they would basically have responsibility for contextualizing anything happening with the KPI. And obviously that's their responsibility anyway.

So maybe you have a manager of customer success, and maybe they have a suite of three or four KPIs they're responsible for, and maybe two of them are reported upward to the C-suite. They'll just have a responsibility for either providing an explanation to their VP, who can share it, or sitting in on those calls, maybe not with the C-suite but at the VP level with all the vertical heads, and giving context. And most importantly, the only real thing that changes in that role, 'cause they're already giving that context, is that they're not just giving it to their manager's manager anymore; they're giving it laterally to the heads of other groups that will also be relying on that information if it's pertinent. So it's not really an extra lift in time, it's just pivoting the way we think about sharing that information and sharing it in a broader, more contextualized forum.

DD: So Patrick, as we conclude today's show, do you have any final pieces of advice about the biological model and how organizations can implement or where they can find out more about the model?


PM: In terms of where to implement it, just listen to what you can't hear. When you're thinking about how well your company is doing, I want you to ask yourself: what don't I know, what do I really want to know, and what do I know but don't trust? Because that's what this model is set up to do. It's set up to call into question what's actually being told to you at a high level after it's passed through essentially a game of telephone, through six or seven hands, before it gets to your desk. And secondly, it calls into question what you're not able to get, but may actually be able to get without knowing it.

So really, implementing the model is a low-cost, high-value solution, and it takes all of a couple of months to implement with the right type of vision in place. It's a very exciting model and I'm looking forward to seeing more and more companies adopt it over time. In terms of where to learn more about it, it's a model that I've developed myself, so you can always reach out to me on LinkedIn, read some of the articles I'm producing in the space, or find me through Northeastern University and Boston University, where I teach as well. Always happy to talk more about that.

DD: Great. And we'll put a link in the show notes for you, the listener, to access some of those articles. Okay. So we're nearly at the conclusion of today's episode, but before we leave you, we did want to make a quite important special announcement and talk about a new quarterly magazine that we have published, which is now on issue two. Both Patrick and Philipp are contributors and have produced excellent pieces for the magazine. The magazine is called "The Data Scientist" and it features many of the world's leading data scientists and organizations implementing data science and AI. The articles are wide ranging, spanning machine learning, data engineering and platforms.

We have industry case studies from some of the world's largest companies. There are articles on data leadership and management of teams, and careers advice for the data scientists themselves. So it's packed full of insight, and there are no adverts, apart from two or three: one for this show, of course, we had to do that, and also for the main show sponsor, Data Science Talent, obviously my recruitment company. But other than that, it is packed full of deep, content-rich articles. Patrick's article is due in issue three, out in May 2023. Philipp's article is available now in issue two, and it has an incredible amount of value in it, because it tackles one of the biggest issues data science leaders face in being successful, which is how to get their data science MVPs into production and scaled effectively in large enterprises.

As well as being an article, it contains a superb process map which is effectively your complete cheat sheet for the industrialization of data science in big companies. So if you subscribe to the magazine, which is free, we'll even send you a free PDF of that process map. You can subscribe to the magazine and all future issues at datasciencetalent.co.uk/media. We will of course put that in the show notes. Patrick, this has been an absolute pleasure, lots of insight into the biological model and the positive impact it can have on large companies. So thank you so much for talking to us. I'm sure we'll be having you back to talk in more depth about data governance. We wish you the best. Thank you.

PM: Thank you gentlemen.

DD: And thank you also to my co-host Philipp Diesinger, and of course to you for listening. Do check out our other episodes at datascienceconversations.com. We look forward to having you back on the next show [outro music].
