Sign up for Data Mesh Understanding's free roundtable and introduction programs here: https://landing.datameshunderstanding.com/
Please Rate and Review us on your podcast app of choice!
If you want to be a guest or give feedback (suggestions for topics, comments, etc.), please see here
Episode list and links to all available episode transcripts here.
Provided as a free resource by Data Mesh Understanding / Scott Hirleman. Get in touch with Scott on LinkedIn if you want to chat data mesh.
Transcript for this episode (link) provided by Starburst. See their Data Mesh Summit recordings here and their great data mesh resource center here. You can download their Data Mesh for Dummies e-book (info gated) here.
Pink's LinkedIn: https://www.linkedin.com/in/pink-xu/
In this episode, Scott interviewed Pink Xu, Change Manager of Business Impact of Data Products at Vista.
Before we jump in: there are a few examples in this episode that are specific to Vista, but I think it is incredibly relevant when looking at measuring the impact of your data work. As Pink says, set the objective/goal for the data product and then measure whether it met that objective/goal. It isn't the impact framework's job to measure whether the objective of the data product is valuable, only to provide an objective way to measure how well the data product met its goal.
Some key takeaways/thoughts from Pink's point of view:
Pink started the conversation on what her focus is: setting the framework and methodology for measuring the impact of data products. At Vista, they wanted to create a way of measuring the impact of data products that was as standardized as possible so people looking at impact don't have to learn a new way of measurement for each incremental data product. With data observability in data mesh, most organizations focus on centrally defining and describing data quality/trust metrics instead of centrally doing the measuring or setting the SLAs - that is on the data product owners. That way, there is comparability across data products, which means less work for someone to understand the quality of the data. Vista is taking the same approach with impact measurement: making it easier on data producers to measure and report their own impact, and making it FAR easier on people consuming that information to compare across data products instead of wrangling customized metrics for each one.
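To make the "centrally defined template, producer-owned measurement" idea concrete, here is a purely illustrative sketch. All names and fields are hypothetical - this is not Vista's actual framework, just one way such a shared template could look:

```python
from dataclasses import dataclass

# Hypothetical sketch of a centrally defined impact metric template.
# The central team defines WHAT gets measured and how it is described;
# each data product owner supplies their own target and measurement.

@dataclass
class ImpactMetric:
    name: str         # e.g. "order_abandon_rate" (invented example)
    impact_type: str  # "financial", "business_decision", or "customer_experience"
    unit: str         # e.g. "percent", "EUR", "days"
    target: float     # goal set by the data product owner
    measured: float   # value reported by the data product owner

    def met_target(self, lower_is_better: bool = False) -> bool:
        """Objective check: did the data product meet its own goal?"""
        if lower_is_better:
            return self.measured <= self.target
        return self.measured >= self.target

# Because every data product reports through the same template,
# consumers can compare impact across products without learning
# a new measurement scheme each time.
abandon = ImpactMetric("order_abandon_rate", "customer_experience",
                       "percent", target=20.0, measured=15.0)
print(abandon.met_target(lower_is_better=True))  # True: 15% is within the 20% goal
```

Note the framework only answers "did the product meet its stated goal?" - whether the goal itself was worth pursuing is a separate question, exactly as Pink describes.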
According to Pink, the most common impact people want to look at for a data product is direct financial impact. That is completely understandable and what she initially thought she'd focus on most. But as Zhamak and others have also mentioned, this can lead to only creating data products with direct bottom-line impact - things that directly drive more sales and/or cost savings. Often those are quick, small-scale, disconnected wins. That means missing the big picture of what is happening in your organization and failing to set long-term strategic direction. If something helps reduce time-to-hire for HR or leads to better retention, that is a much more difficult financial or business impact to measure. Then think about the decision to enter a new market - the go/no-go decision: can that really be measured in financial impact terms versus smart-business-decision terms? So at Vista, they are also looking at more qualitative measures, like what impact a data product has on making key business decisions - is it the core driver of a key decision, or one of 100 aspects of a decision? And then what about impact on customer experience?
So, Pink is spearheading development of a framework that includes what type of impact people want to measure. That can obviously be direct financial impact - sales, margins, etc. - but also a number of other impacts as mentioned above. The key to the framework is making it easy for people to measure the impact and then communicate it in a language others - mostly execs - understand. But the central data team - notably Pink here - does NOT tell data product owners what impact a data product should have. Nor does the central team do the measurement. The central team's role is to make sure people are measuring appropriately within the framework and to help data product creators adopt the measurement framework.
As an example at Vista, Pink talked about a data product measuring the time between when a customer starts using a certain template and when they complete their order, as well as the order abandon rate. If you look at how quickly someone can actually do the work they want and reach a good final quality, you can recommend better templates to users or even decide what types of templates you might want to create more of. And you can measure reorder rates as well. So the impact measurement is focused more on customer satisfaction, with a long-term, long-tail financial impact. That is much harder to measure than the impact of a pricing change.
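The template-to-order example can be sketched as a small computation. This is illustrative only - the session records and field names are invented, not Vista's actual data model:

```python
# Hypothetical sketch: computing template-to-order time and abandon rate
# from session records. All data and field names are invented.

sessions = [
    # (session_id, started_template, completed_order, minutes_to_complete)
    ("s1", True, True, 12.0),
    ("s2", True, False, None),  # abandoned before ordering
    ("s3", True, True, 8.0),
    ("s4", True, False, None),  # abandoned before ordering
]

started = [s for s in sessions if s[1]]
completed = [s for s in started if s[2]]

# Abandon rate: share of sessions that started a template but never ordered.
abandon_rate = 1 - len(completed) / len(started)

# Average time from starting a template to completing the order.
avg_minutes = sum(s[3] for s in completed) / len(completed)

print(f"abandon rate: {abandon_rate:.0%}")       # 50%
print(f"avg time to order: {avg_minutes} min")   # 10.0 min
```

Both numbers are descriptive customer-experience measures; tying them back to long-tail financial impact (e.g. via reorder rates) is the harder, longer-term step Pink describes.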
A really interesting insight is that even after a year, Pink still believes they are pretty early in their impact measurement journey. They are not yet using impact measurement to drive decisions on creating new data products - only measuring what has already been created. Essentially, the impact measurement is in the "descriptive" phase of analytics; they have not yet moved to predictive or prescriptive. That doesn't mean the data products themselves are only descriptive, but the measurement is currently at that stage. Scott note: being at the descriptive stage is VERY unsurprising. Talk to anyone in the non-profit world about impact measurement - it's incredibly difficult. Pink and team are pretty far ahead of the general curve here; as an industry, we are just learning how to effectively measure the impact/value of data work in general.
Pink made a great point circling back on financial impact measurement. If you focus only on measuring immediate value, you won't build data products for better long-term decision making. But you also can't focus only on the promise of what impact a data product might have in the future. So make sure any impact measurement framework considers all aspects of value and aligns with a long-term vision, but can point to value creation in the short term too. As part of this, they are trying to work towards a way to equate financial impact, business decision impact, and customer value impact to make the three more comparable. But that takes a lot of data to measure long-term versus short-term outcomes :)
According to Pink, a few missteps that many might make relative to impact measurement, some of which are covered above but important to reiterate:
1) creating a data product without a specific objective you can measure against. If the data product is successful, what does success look like and would that success justify the investment?
2) focusing too much on direct, measurable financial impact when deciding which data products to create/fund.
3) focusing on short-term or long-term impact only.
4) not interviewing data product users - and potential users - to assess the value they get.
5) creating a framework that doesn't make impact measurement comparable - at least in some form - across data products. It's really only qualitative at that point even if it involves numbers.
6) creating a framework that values data product usage in a vacuum. Raw usage numbers are essentially vanity metrics - instead, ask how much usage the data product gets compared to expectations, and how much usage it gets relative to what a successful data product for that purpose/objective would get.
7) forgetting to involve users in the framework creation process - what are their concerns? If they were concerned it would value business decision impact well below financial impact, do you think they'd want to use it?
8) skipping communication about what people consuming a measurement would want to know. If the CEO wants to know XYZ metric from a data product, you want to build that measurement in upfront.
9) not helping data product owners adopt the impact measurement framework. They need to be able to understand it to want to use it, especially if it is measuring essentially how well their data product is driving value.
Quick tidbits:
When considering creating a new data product, think about the impact you expect it to have. What is the objective for creating the data product? If you are successful with your data product, what does success mean and why would that justify the investment?
A single framework means everyone is looking at the same things and makes it easy to understand. Are they reporting increased sales, contribution margin, gross margin, etc.? It also means people can trust how it was measured. Scott note: as a former financial analyst, this is incredibly important…
Data Mesh Radio is hosted by Scott Hirleman. If you want to connect with Scott, reach out to him on LinkedIn: https://www.linkedin.com/in/scotthirleman/
If you want to learn more and/or join the Data Mesh Learning Community, see here: https://datameshlearning.com/community/
All music used this episode was found on PixaBay and was created by (including slight edits by Scott Hirleman): Lesfm, MondayHopes, SergeQuadrado, ItsWatR, Lexin_Music, and/or nevesf