#174 Measuring the Impact and Value of Your Data Products in Data Mesh - Interview w/ Pink Xu
Episode 174 • 29th December 2022 • Data Mesh Radio • Data as a Product Podcast Network
Duration: 01:13:09


Shownotes

Sign up for Data Mesh Understanding's free roundtable and introduction programs here: https://landing.datameshunderstanding.com/

Please Rate and Review us on your podcast app of choice!

If you want to be a guest or give feedback (suggestions for topics, comments, etc.), please see here

Episode list and links to all available episode transcripts here.

Provided as a free resource by Data Mesh Understanding / Scott Hirleman. Get in touch with Scott on LinkedIn if you want to chat data mesh.

Transcript for this episode (link) provided by Starburst. See their Data Mesh Summit recordings here and their great data mesh resource center here. You can download their Data Mesh for Dummies e-book (info gated) here.

Pink's LinkedIn: https://www.linkedin.com/in/pink-xu/

In this episode, Scott interviewed Pink Xu, Change Manager of Business Impact of Data Products at Vista.


Before we jump in: there are a few examples in this episode specific to Vista, but I think the conversation is incredibly relevant to measuring the impact of your data work in general. As Pink says, set the objective/goal for the data product and then measure whether it met that objective/goal. It isn't the impact framework's job to measure whether the objective of the data product is itself valuable, only to provide an objective way to measure how well the data product met its goal.


Some key takeaways/thoughts from Pink's point of view:

  1. Look to standardize the way you measure impact for data products. Much like data observability/SLA metrics, a centralized team shouldn't be the one measuring or defining the target impact of a data product - it should only provide the way to measure it.
  2. Again like data observability, an impact measurement framework/methodology means people can trust exactly how impact was measured without having to dig into every measurement decision. Without that impartiality, it's like grading your own essay.
  3. Impact measurement can only go so far. It shouldn't be the only consideration in valuing a data product, but without a fair, impartial framework, measuring the value of data work becomes all the more difficult.
  4. A data product "enables business impact"; it cannot create the impact itself if no one uses it. Think about who gets "credit" for the impact - is it the data product creator or the team that acted on the insights from the data product? Look to reward/credit both parties if possible to generate a better collaborative culture.
  5. Data products don't inherently have value - they only have value if used. So look to interview data product users - and potential users that aren't leveraging that data product - to help assess how valuable a data product really is.
  6. It's crucial to have comparability in impact measurement across data products so people can easily understand what terminology and measurements mean. A general measurement framework/methodology means people can better understand and compare.
  7. It's easy to focus only on the direct financial impact of data products, but this ignores value to the organization beyond increased sales or cost savings. If you don't consider other factors, you will only create data products that directly drive sales or cost savings instead of helping make better long-term - or even short-term - data-informed decisions.
  8. When considering creating a new data product, think about the impact you expect it to have. What is the objective for creating the data product? If you are successful with your data product, what does success mean and why would that justify the investment?
  9. One impact measure that isn't directly financial is a decision-influence model. Does the data product drive key business decisions, and is it a key contributor to those decisions or just one of many inputs? It's more qualitative but still valuable/useful.
  10. Another impact measure might be customer satisfaction / impact on customer lifetime value. This is a much more nebulous impact to measure, especially if customers have a long lifetime, but that doesn't make it less valuable.
  11. Impact measurement is notoriously difficult - just ask anyone in marketing about their attribution model, or anyone in the non-profit space. Focus on creating an impact measurement framework and working with teams so they can leverage it; a central team doing the measurement work itself can create a bottleneck and will often lack the business context.
  12. !Useful Outcome Insight!: At Vista, more data products have business decision impact as their objective than direct financial impact.
  13. They have not yet reached the point where impact measurement is used for project investment decisions - that is a harder problem and okay to tackle down the line as you learn to measure impact first. They are in the descriptive analytics phase for the impact framework, not predictive or prescriptive. Basically, that is a 'running' problem; it's okay not to take it on while you're still learning to crawl and walk.
  14. Impact measurement is not about a yay/nay decision or a good/bad score for a data product; the impact measurement framework is only part of the value measurement of a data product.
  15. Don't focus only on data products with short-term impact - often the immediate financial impact. That will lead to small wins instead of making better long-term decisions. Focus on measuring both. Your impact framework should align to your business strategy, not just next quarter's income statement.
  16. At Vista, the goal is to get to a place where they can reasonably compare financial impact, business decision impact, and customer value impact against each other. But it's still at best apples and oranges, and it's still early days.
  17. Surviving versus health: when you are focused on 'surviving', impact measurement is a champagne problem. But impact measurement is important to the long-term 'health' of your data ecosystem, helping you make better decisions on data work.
  18. Trying to measure data product usage in a vacuum is a bad methodology. Instead, look at what the expected usage would be if the data product were successful and users leveraged it a lot. Otherwise, something that drives key decisions but is rarely touched will be valued much less than something people touch quite often but don't really leverage to make decisions (see the sketch after this list).
  19. Much like any aspect of data mesh, it's crucial to communicate with users as you build an impact framework and methodology. What are their concerns? How can you make it easy for them to adopt? It's easy to lose sight of the fact that an impact measurement methodology is also change management, which requires strong communication.
  20. When building a data product, think about who would want to consume information about its impact. Then leverage the relevant parts of your impact measurement framework and align upfront on how the data product will be measured. Figuring out how to measure after the fact creates friction and can lower trust.
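
To make takeaway 18 concrete, here is a minimal sketch of scoring usage relative to expectation rather than in a vacuum. Everything in it - the class, the field names, the 0.8 floor for decision-driving products - is a hypothetical illustration, not Vista's actual framework:

```python
from dataclasses import dataclass

@dataclass
class DataProductUsage:
    name: str
    monthly_queries: int            # observed usage
    expected_monthly_queries: int   # usage if the product fully met its objective
    drives_key_decisions: bool      # qualitative flag, e.g. from user interviews

def usage_score(u: DataProductUsage) -> float:
    """Usage relative to expectation, capped at 1.0 so heavy raw usage
    alone can't outrank a product that actually drives decisions."""
    ratio = min(u.monthly_queries / max(u.expected_monthly_queries, 1), 1.0)
    # A rarely touched product that drives key decisions still scores well.
    return max(ratio, 0.8) if u.drives_key_decisions else ratio

# Example: a go/no-go analysis product queried 4x/month vs. an expected 5
print(usage_score(DataProductUsage("market-entry", 4, 5, True)))  # 0.8
```

The point of the floor is exactly Pink's warning: a product queried rarely but relied on for key decisions shouldn't be outranked by a heavily touched product no one acts on.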



Pink started the conversation on what her focus is: setting the framework and methodology for measuring the impact of data products. At Vista, they wanted to create a way of measuring the impact of data products that was as standardized as possible so people looking at impact don't have to learn a new way of measurement for each incremental data product. With data observability in data mesh, most organizations are focused on centrally defining and describing data quality/trust metrics instead of centrally doing the measuring or setting the SLAs - that is on the data product owners. That way, there is comparability across data products, which means less work for someone to understand the quality of the data. Vista is taking the same approach with impact measurement: making it easier on data producers to measure and report their own impact, and making it FAR easier on people consuming that information to compare across data products instead of deciphering customized metrics for each one.
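
As a rough illustration of what such a standardized measurement record could look like - the field names and types here are assumptions for the sketch, not Vista's actual schema - a central team might publish something like:

```python
from dataclasses import dataclass
from enum import Enum

class ImpactType(Enum):
    FINANCIAL = "financial"                  # sales, margin, cost savings
    BUSINESS_DECISION = "business_decision"  # influence on key decisions
    CUSTOMER_VALUE = "customer_value"        # satisfaction, lifetime value

@dataclass
class ImpactMeasurement:
    data_product: str
    impact_type: ImpactType
    objective: str   # the goal set when the product was created
    target: float    # what success would look like, agreed upfront
    measured: float  # owner-reported value using the shared method
    unit: str        # e.g. "EUR", "decisions influenced", "NPS points"

    def attainment(self) -> float:
        """How well the product met its own objective (the framework's job),
        not whether the objective itself was valuable."""
        return self.measured / self.target if self.target else 0.0

m = ImpactMeasurement("template-funnel", ImpactType.CUSTOMER_VALUE,
                      "cut template-to-order time", target=10.0, measured=8.0,
                      unit="minutes saved per order")
print(m.attainment())  # 0.8
```

The design choice mirrors the observability analogy: the central team owns the schema and the attainment calculation; each data product owner fills in their own objective, target, and measured values.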


According to Pink, the most common impact people want to look at for a data product is direct financial impact. That is completely understandable and what she initially thought she'd focus the most on. But as Zhamak and others have also mentioned, this can lead to only creating direct bottom-line impact data products, things that directly drive more sales and/or cost savings. Often those are quick, small-scale, disconnected wins. That means missing the big picture of what is happening in your organization and failing to set long-term strategic direction. If something helps reduce time-to-hire for HR or leads to better retention, that is a much more difficult financial or business impact to measure. Then think about the decision to enter a new market, the go/no-go decision - can that really be measured in financial impact terms versus smart business decision terms? So at Vista, they are also looking at more qualitative measures, like what impact a data product has on making key business decisions - is it the core driver of a key decision or one of 100 inputs? And then what about impact on customer experience?


So, Pink is spearheading development of a framework that includes what type of impact people want to measure. That can obviously be direct financial impact - be it sales, margins, etc. - but also a number of other impacts as mentioned above. The key to the framework is making it so people can easily measure the impact and then communicate it in a language others - mostly execs - understand. But the central data team - notably Pink here - does NOT tell data product owners what impact a data product should have. Nor does the central team do the measurement. The central team's role is to make sure people are measuring appropriately within the framework and to help data product creators adopt the measurement framework.


As an example at Vista, Pink talked about a data product measuring the time between when a customer starts using a certain template and when they complete their order, as well as the order abandon rate. If you look at how quickly someone is actually able to do the work they want and get to a good final quality, you can recommend better templates to users or even decide what types of templates you might want to create more of. And you can measure reorder rates as well. So the impact measurement is focused more on customer satisfaction, with a long-term, long-tail financial impact. Much harder to measure than the impact of a pricing change.
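
A hedged sketch of how those template-to-order metrics might be computed from a simple event log - the event names, log shape, and sample data are assumptions for illustration, not Vista's pipeline:

```python
from datetime import datetime

events = [  # (customer_id, event, timestamp) - hypothetical data
    ("c1", "template_started", datetime(2022, 12, 1, 10, 0)),
    ("c1", "order_completed",  datetime(2022, 12, 1, 10, 25)),
    ("c2", "template_started", datetime(2022, 12, 1, 11, 0)),
    # c2 never completes, so they count as abandoned
]

starts = {c: t for c, e, t in events if e == "template_started"}
completes = {c: t for c, e, t in events if e == "order_completed"}

# Minutes from template start to order completion, per completing customer
minutes_to_order = [(completes[c] - starts[c]).total_seconds() / 60
                    for c in starts if c in completes]
abandon_rate = 1 - len(completes) / len(starts)

print(minutes_to_order)                     # [25.0]
print(f"abandon rate: {abandon_rate:.0%}")  # abandon rate: 50%
```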


A really interesting insight is that even after a year, Pink still believes they are pretty early in their impact measurement journey. They are not yet using impact measurement to drive decisions on creating new data products, only measuring what has already been created. Essentially, the impact measurement is in the "descriptive" phase of analytics and has not yet moved to predictive or prescriptive. That doesn't mean the data products themselves are only descriptive, but the measurement is currently at that stage. Scott note: it is VERY unsurprising to be at the descriptive stage. Talk to anyone in the non-profit space about impact measurement - it's incredibly difficult. Pink and team are pretty far ahead of the general curve here; as an industry, we are just learning how to effectively measure the impact/value of data work in general.


Pink made a great point circling back on financial impact measurement. If you are focused only on measuring immediate value, you won't build data products for better long-term decision making. But you can't only focus on the promise of what impact a data product might have in the future. So you want to make sure any impact measurement framework considers all aspects of value and aligns with a long-term vision but can point to value creation in the short term too. As part of this, they are trying to work towards a way to equate financial impact, business decision impact, and customer value impact to make the three more comparable. But that takes a lot of data to measure long-term versus short-term outcomes :)
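
One naive way to put the three impact types on a comparable footing is to normalize each against its own target and weight by strategic priority. To be clear, this is purely a hypothetical sketch - Pink says they haven't solved this and it's still apples and oranges; the weights and inputs below are invented for illustration:

```python
# Strategic weights per impact type - a business choice, not a data one
weights = {"financial": 0.4, "business_decision": 0.35, "customer_value": 0.25}

def blended_impact(attainments: dict[str, float]) -> float:
    """attainments: measured/target per impact type; capped at 1.0 so
    overshooting one objective can't mask missing another."""
    return sum(weights[k] * min(v, 1.0) for k, v in attainments.items())

print(blended_impact({"financial": 0.6,
                      "business_decision": 1.0,
                      "customer_value": 0.8}))  # ~0.79
```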


According to Pink, here are a few missteps many might make relative to impact measurement, some of which are covered above but are important to reiterate:

1) creating a data product without a specific objective you can measure against. If the data product is successful, what does success look like and would that success justify the investment?

2) focusing too much on direct, measurable financial impact when deciding which data products to create/fund.

3) focusing on short-term or long-term impact only.

4) not interviewing data product users - and potential users - to assess a data product's value.

5) creating a framework that doesn't make impact measurement comparable - at least in some form - across data products. It's really only qualitative at that point even if it involves numbers.

6) creating a framework that values data product usage in a vacuum. Raw usage numbers are essentially vanity metrics - instead, ask how much usage the data product gets compared to expectations and relative to what a successful data product for that purpose/objective would get.

7) forgetting to involve users in the framework process creation - what are their concerns? If they were concerned it would value business decision impact well below financial impact, do you think they'd want to use it?

8) skipping communication about what people consuming a measurement would want to know. If the CEO wants to know XYZ metric from a data product, you want to build that measurement in upfront.

9) not helping data product owners adopt the impact measurement framework. They need to be able to understand it to want to use it, especially if it is measuring essentially how well their data product is driving value.


Quick tidbits:



A single framework means everyone is looking at the same things, which makes them easy to understand. Are they reporting increased sales, contribution margin, gross margin, etc.? It also means people can trust how it was measured. Scott note: as a former financial analyst, this is incredibly important…


Data Mesh Radio is hosted by Scott Hirleman. If you want to connect with Scott, reach out to him on LinkedIn: https://www.linkedin.com/in/scotthirleman/

If you want to learn more and/or join the Data Mesh Learning Community, see here: https://datameshlearning.com/community/

If you want to be a guest or give feedback (suggestions for topics, comments, etc.), please see here

All music used this episode was found on PixaBay and was created by (including slight edits by Scott Hirleman): Lesfm, MondayHopes, SergeQuadrado, ItsWatR, Lexin_Music, and/or nevesf
