#148 It's A-Okay to Solve for Today: ANZ Plus's Early Data Mesh Success - Interview w/ Adelle McDonald
Episode 148 • 30th October 2022 • Data Mesh Radio • Data as a Product Podcast Network
Duration: 01:19:09


Shownotes

Sign up for Data Mesh Understanding's free roundtable and introduction programs here: https://landing.datameshunderstanding.com/

Please Rate and Review us on your podcast app of choice!

If you want to be a guest or give feedback (suggestions for topics, comments, etc.), please see here

Episode list and links to all available episode transcripts here.

Provided as a free resource by Data Mesh Understanding / Scott Hirleman. Get in touch with Scott on LinkedIn if you want to chat data mesh.

Transcript for this episode (link) provided by Starburst. See their Data Mesh Summit recordings here and their great data mesh resource center here. You can download their Data Mesh for Dummies e-book (info gated) here.

Adelle's LinkedIn: https://www.linkedin.com/in/adelle-mcdonald-79a9a2139/

In this episode, Scott interviewed Adelle McDonald, Customer and Origination Lead at ANZ Plus, a bank in Australia and New Zealand.

Some key takeaways/thoughts from Adelle's point of view:

  1. To drive buy-in in the 1:1 conversations with domain owners - the business leaders - you will have to tailor your message to each person. Listen to their pain and reflect it back to them.
  2. Focus on an ability to quickly pivot with low cost. That can mean things aren't as product-worthy to start but it means you can evolve towards value more quickly.
  3. Addressing domain owners' pain points gets them looking at you as a partner. They will be much more willing to work with you, especially as you partner to provide actionable insights.
  4. ?Controversial?: ANZ Plus is embedding data leads into domains to handle the data quanta for the domain and to build what the team needs from data. As part of that, they are slowly building up the domains' capabilities to handle their own data. This minimizes friction and creates buy-in but is likely not sustainable long term - ownership will need to be transferred.
  5. Very important to tie the data quanta to use cases - driving value for users means focusing on use cases.
  6. Developers or software engineers owning data is complicated. Make it so they can start to make small changes and learn in a safe way instead of dumping all ownership on them at once. Ownership and knowledge aren't a switch you flip.
  7. Using a git-based, pull request approach, developers can attempt data work without manual stitching so they learn to do the work themselves; but it can still be easily overseen by someone with more data expertise.
  8. One way to potentially drive executive buy-in is joint/collaborative KPIs. So it's not just about their domain's results but how well they work and drive results with another domain.
  9. ?Controversial?: It's okay to have a data asset with murky long-term ownership at first. If usage picks up, you want to convert it to a proper data quantum, but you need to be able to test the waters with things and see if they are actually useful first. Clarity comes with usage.
  10. When creating anything data related, use a software development lifecycle (SDLC) approach. Domains may create something exclusively internal to the domain but once you look to share externally, you have rules and standards and best practices. Move from the pipeline approach to the software approach to data.
  11. Automatically generated documentation can considerably help with governance. You have it in the same repo and it can handle a large part of explaining what is happening with the data to make other decisions easier.
  12. Automate your governance checklist as much as possible so you prevent the manual work of governance. No gates, simple checks, that's a winning, low friction way to govern.
  13. When you don't automate your governance checklist, domains often feel they need to invent or buy tooling to comply with governance. Making compliance a frictionless, automated check means far less complexity and fewer issues.

Adelle started the conversation on driving buy-in and how important it is to tailor your message. As prior guests have also noted, the easiest way to drive buy-in is by helping the person whose buy-in you are seeking as part of the process. So find their needs and use data to drive a positive outcome for them first.


When it comes to getting buy-in from domain owners, Adelle has seen that finding their pain points and good ways to address them will get them to see you as a partner in leveraging their data. They will be much more willing to work with you than if you simply put new responsibilities on their plate. You can work with them to ensure their information - especially purchased data - is providing value and gives people actionable insights, not just interesting insights. It may be a tough pill to swallow but you need them to see you as that partner in the long run.


At ANZ Plus, they are embedding data leads into the domains to be the main point of contact for other domains' data needs, according to Adelle. Those data leads serve their domains by addressing internal business needs with data while also creating the data sharing mechanisms - the actual data quanta in data mesh terms - for sharing that domain's data across the rest of the organization. With this work falling on the data leads, ANZ Plus is not yet asking domains to take on much responsibility for data. This minimizes the work the domains have to take on but still significantly accelerates the time to business value for new data use cases within the domain. As a result, domain owners have been very happy to work with the data leads because there isn't much incremental work they are responsible for - at least at first.


Adelle and the data mesh fans/leaders at ANZ Plus are aware that their data ownership model is probably not the right fit in the long run. But it's working well for them right now and that's what matters to them. They have found a setup that doesn't add a ton of overhead process and will evolve as the capabilities and resources to hand over actual data ownership get built out more and more. When they evolve, they are focused on maintaining the ways of sharing context rather than trying to keep the exact data quanta as is. But if your organization isn't very clear that things will constantly evolve, this could be a very hard setup to maintain in the long run.


As part of their plans for evolution, Adelle mentioned that they are focusing on maintaining the ability to pivot at a low overall cost. As the world around them changes and as they learn more by acting on the insights they generate, the team needs to evolve along with their markets to keep driving value. They are focusing on getting to the right insights as fast as possible in sustainable ways.


Thus far, Adelle and team are finding it's very important to tie use cases to data quanta. To drive value for customers, they need to focus on the use cases. This is especially relevant when looking at customer journeys - you want to set yourself up to collect the right data to analyze to understand what's going on with that customer journey.


Adelle emphasized the need to create an environment where developers can evolve safely relative to data. At first, developers won't know how to deal with data as an entire concept, but most will understand at least some aspects of working with it. So how can you get them more and more used to dealing with data and learning more? By providing a safe and easy way to make changes when they need to. Teaching developers and software engineers to deal with data isn't a switch you flip, just like handing over ownership - it's best done as a gradual process. Easier said than done, of course.


According to Adelle, it's okay to have somewhat murky data ownership at first for a new potential data asset. If it starts to get broader use, you need to lock in who will own it, how, and why, but you don't need to get ahead of yourself and drive towards a perfect data quantum every time you look to share data. Have high context exchange with other users, let them know how much they can trust things, but also be in a mode of trying things out and seeing where they might go before investing the time in data quantum creation. This gives domains more freedom to play with their data, but data consumers must also be flexible, as what they are consuming might evolve as it gets molded into something more scalable, usable, and trustworthy. Without an ability to evolve quickly, this model will likely not work.


For Adelle, it's crucial to think about data like any other software development. Your software development lifecycle (SDLC) needs to include things like governance and API interfaces as part of development. As an example on the operational side, domains can build small internal apps that aren't ready for external domains while they figure things out and test. But once other domains need access, you have to start treating that access like a product. The same goes for data. You should have standard internal practices to make this low friction.
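To make "data as a product" concrete, one common way to treat shared data like a software interface is a simple contract that consumers can validate against. This is a minimal illustrative sketch, not ANZ Plus's actual approach; the field names and types are assumptions invented for the example.

```python
# Hypothetical data contract for a domain's shared dataset ("data quantum").
# The required fields below are illustrative assumptions, not a real schema.
REQUIRED_FIELDS = {
    "customer_id": str,
    "journey_stage": str,
    "event_timestamp": str,  # ISO 8601 string in this sketch
}

def validate_record(record: dict) -> list:
    """Return a list of contract violations for one record (empty list = valid)."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"wrong type for {field}: expected {expected_type.__name__}")
    return errors
```

Keeping a check like this in the same repo as the data-producing code means a breaking change to the shared interface fails fast, the same way a broken API test would.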


Automated documentation has been a big win for governance according to Adelle. While you still need additional documentation, having the base-level documentation auto-generated in the same repo as your data quantum code is very helpful for sharing what you are doing with data and why. It also makes other governance decisions easier because people can see what is happening and what the information is about. They've also automated much of their governance checklist as part of their software development lifecycle, so people can test whether they meet governance requirements as they develop, not at a gate at the end.
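The auto-generated documentation idea can be sketched as metadata kept next to the data quantum code and rendered to markdown on each build. The dataset name, owner, and columns below are invented for illustration; the episode does not describe ANZ Plus's actual tooling.

```python
# Hypothetical metadata file kept in the same repo as the data quantum code.
# All names and descriptions here are illustrative assumptions.
metadata = {
    "name": "customer_journey_events",
    "owner": "customer-and-origination",
    "description": "Events emitted along the customer onboarding journey.",
    "columns": {
        "customer_id": "Pseudonymous customer identifier",
        "journey_stage": "Named stage of the onboarding journey",
    },
}

def render_docs(meta: dict) -> str:
    """Render base-level markdown documentation from the quantum's metadata."""
    lines = [
        f"# {meta['name']}",
        "",
        meta["description"],
        "",
        f"**Owner:** {meta['owner']}",
        "",
        "| Column | Description |",
        "| --- | --- |",
    ]
    for col, desc in meta["columns"].items():
        lines.append(f"| {col} | {desc} |")
    return "\n".join(lines)
```

Because the docs are generated from the same source of truth as the code, they cannot silently drift from what the data actually contains.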


A few tidbits:

Only collect data for a specific reason. If you don't know specifically why you are collecting it, why are you collecting it?


By making incremental data work requests a pull-based system (think pull requests in git), teams can work somewhat asynchronously and developers can learn to attempt data work in a safe environment.
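One way to make that pull-request flow safe is an automated check that flags breaking schema changes before a reviewer with more data expertise looks at the rest. This is a sketch of one such check, assumed for illustration; the episode doesn't specify which checks ANZ Plus runs.

```python
# Hypothetical CI check for a pull request that changes a shared dataset's
# column list: additive changes pass, removed columns are flagged for review.
def breaking_changes(old_columns: set, new_columns: set) -> set:
    """Return the columns removed by the proposed change (empty set = additive, safe)."""
    return old_columns - new_columns
```

Running a check like this on every pull request lets developers attempt data changes themselves while keeping lightweight oversight in place.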


Before investing in doing data work, ask if it is recurring and what is the value, why are we doing this. If it will be recurring work, look to automate as much as possible first. It might not yet be data quantum worthy but it sets you up for when that time comes.


One way to potentially drive executive buy-in is joint/collaborative KPIs. So it's not just about their domain's results but how well they work and drive results with another domain.


When you don't automate your governance checklist, domains often feel they need to invent or buy tooling to comply. Making it frictionless to check against governance requirements, as part of the SDLC, means far less complexity and fewer issues.
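An automated governance checklist can be as simple as a script in the SDLC that verifies required metadata is present, so compliance is a check rather than a gate. The required keys below are illustrative assumptions, not an actual governance standard.

```python
# Hypothetical governance checklist run automatically in CI.
# The required keys and the PII rule are illustrative assumptions.
REQUIRED_KEYS = ["owner", "description", "contains_pii", "retention_days"]

def governance_check(asset_metadata: dict) -> tuple:
    """Return (passed, list of failed checklist items) for one data asset."""
    failures = [key for key in REQUIRED_KEYS if key not in asset_metadata]
    # Example conditional rule: PII assets must justify why the PII is needed.
    if asset_metadata.get("contains_pii") and "pii_justification" not in asset_metadata:
        failures.append("pii_justification required when contains_pii is true")
    return (not failures, failures)
```

Because the check runs on every change, domains never need to invent their own compliance tooling - they just fix whatever the checklist reports.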



Data Mesh Radio is hosted by Scott Hirleman. If you want to connect with Scott, reach out to him on LinkedIn: https://www.linkedin.com/in/scotthirleman/

If you want to learn more and/or join the Data Mesh Learning Community, see here: https://datameshlearning.com/community/

If you want to be a guest or give feedback (suggestions for topics, comments, etc.), please see here

All music used this episode was found on PixaBay and was created by (including slight edits by Scott Hirleman): Lesfm, MondayHopes, SergeQuadrado, ItsWatR, Lexin_Music, and/or nevesf
