#260 Driving the Big Picture Forward - More on Northern Trust's Data Mesh Implementation - Interview w/ Jimmy Kozlow
Episode 260 • 15th October 2023 • Data Mesh Radio • Data as a Product Podcast Network
Duration: 01:13:55


Shownotes

Please Rate and Review us on your podcast app of choice!

Get involved with Data Mesh Understanding's free community roundtables and introductions: https://landing.datameshunderstanding.com/

If you want to be a guest or give feedback (suggestions for topics, comments, etc.), please see here

Episode list and links to all available episode transcripts here.

Provided as a free resource by Data Mesh Understanding. Get in touch with Scott on LinkedIn if you want to chat data mesh.

Transcript for this episode (link) provided by Starburst. See their Data Mesh Summit recordings here and their great data mesh resource center here. You can download their Data Mesh for Dummies e-book (info gated) here.

Jimmy's LinkedIn: https://www.linkedin.com/in/jimmy-kozlow-02863513/

In this episode, Scott interviewed Jimmy Kozlow, Data Mesh Enablement Lead at Northern Trust. To be clear, he was only representing his own views in the episode.

Also, FYI, there were some technical difficulties in this episode where the recording kept shutting down and had to be restarted. So thanks to Jimmy for sticking with it, and hopefully it isn't too noticeable that Scott had to ask questions without having heard the full answer to the previous one.

There is a lot of philosophical discussion in this conversation, but it is tied to very deep implementation experience. It is hard to sum up in full without writing a small novel. Basically, this is probably one to listen to rather than just reading the notes.


Also, Scott came up with a terrible new phrase, asking people to "get out there and be funky."


Some key takeaways/thoughts from Jimmy's point of view:

  1. Start your mesh implementation with your innovators. Find the people who are excited to try out something new. You want to spend your early time on innovating and learning, not constantly driving buy-in. Find good initial partners!
  2. It's okay to start a bit simple - with each domain and with your general implementation - because you'd better believe complexity is coming as you scale. This is not going to be simple. But the ability to tackle that complexity effectively is what differentiates data mesh, so that unavoidable complexity ends up being where you find a lot of the incremental value.
  3. Data product complexity often comes from maintaining interoperability across the data sets within the data product - not just with other data products. Build out a data product with more data sets if you can: that gives a consumer much more closely related information they can easily leverage instead of stitching things together across many data products.
  4. ?Controversial?: There is value in inorganic data product growth. A central team can see where there is a gap in the available data on the mesh. Even when there isn't a use case calling for a specific data product, filling in those important information gaps can create an environment where more use cases will emerge quickly. Scott note: I go back and forth on this a lot. Data for the sake of data is bad but if you identify gaps, it can accelerate a lot of new use cases.
  5. Financial services is somewhat unique in that many of the desired use cases are already known - lines of business have often had their use cases for decades - so it's about serving them well in new ways rather than inventing new use cases from whole cloth, at least at the start of a journey.
  6. ?Controversial?: Start by serving existing needs instead of trying to invent new use cases. Imagination around new use cases will be important, but there are probably a lot of things your business wishes it could already do to serve customers, and that's where your low-hanging fruit is. Go after those first to create adoption momentum.
  7. While data mesh might add more initial friction to use cases - you need to gather more information early to embed the necessary regulatory/governance decisions - it means a better overall outcome. But getting people bought in that the initial friction is worth it can be hard.
  8. The most important metric - at least for Northern Trust - is shown value. That demonstration creates momentum through excitement and pushes adoption as more teams want to capture some of that value - and fear missing out on it.
  9. What success looks like in data mesh will be different for every organization especially through the different phases of your mesh implementation. Scott note: this is so incredibly important. Consider what would make your implementation a success in each phase.
  10. It's important to understand the difference between creating the ability for teams to leverage data mesh and actual adoption. One is much more tech focused and is very important but value comes from people participating and using what you build. Just because you build it doesn't mean people will use it. Focus on ensuring adoption.
  11. It's crucial to constantly communicate why you are going this route with data mesh. Even if there is friction to getting something completed, it's "the right kind of friction." You are making your data work sustainable and scalable.
  12. There is a steep learning curve in getting to critical mass with data mesh. Each domain has a different capability level, and keeping teams collaborating and communicating well can be a challenge. Don't assume good communication will happen; you have to enable it.
  13. ?Controversial?: Leverage your central team to help domains early in their mesh journey. Help from the central team is an easy way to speed up shipping that initial data product. And at the same time, it helps bring that domain's capabilities up to the necessary level.
  14. When helping a domain, the central team's goal is to get them to good enough quickly and then give them the autonomy to do what's valuable. Things like internal sharing communities are then extremely helpful as domains can exchange deeper insights with each other and connect, creating potential collaborations.
  15. ?Controversial?: When first working with a domain, start simple. Don't go for an advanced use case or data product, build up their capabilities instead of trying to throw them in the deep end.
  16. There are so many complexities in data mesh. Constantly consider trading complexity versus value, whether that is speed, a deeper use case, etc. The complexities will still be there when teams are better able to deal with them, no need to try to tackle everything at the start.
  17. Take measurements - e.g. time to first data product production for a new domain - with a big grain of salt. If you are close to the domain, you can get a sense of how much friction is in your processes and platform but there are so many factors impacting time to launch an initial data product.
  18. There's a hard balance to strike between data modeled to fit the use case and data modeled to fit with the rest of the data that's available on the mesh. It's especially hard when the domain really doesn't understand how to model data well yet. Be prepared to help domains out to make sure they aren't just publishing a data silo in data product form.
  19. There are many places in your implementation where you want to reduce time and friction. Getting to a proper level of understanding isn't bad friction. Trying to rush people through their understanding of data and how their data plays into the organization is likely to bite you in the end. Same for trying to rush people through learning your new ways of working.
  20. While it might seem obvious, focus is such a crucial aspect to doing data mesh right. Helping people figure out what to focus on when and keeping the lines of communication open to figure out what's most important right now is TOUGH but valuable.
  21. Similarly, it's incredibly helpful to help people connect the dots, help them see what could be possible. If you have people focused on making sure others can understand the big picture of your implementation and the available data, they can contribute to that big picture so much more.
  22. An organization starting out highly centralized will have a very different journey to one that is starting out highly decentralized.
  23. ?Controversial?: Your central team is there to mostly "coordinate, facilitate, and align people to the policies and practices" plus drive adoption. Look to have a guiding hand, a light touch.
  24. ?Controversial?: Be very careful cutting corners in your implementation. There are many ways to hurt your scalability. Governance is an especially bad corner to cut in financial services.
  25. ?Controversial?: Ask yourself who, at the end of the day, is tasked with making sure your data mesh implementation continues to progress. Not the specific tasks like building a data product but the overall implementation. If there isn't someone focused heavily on that, you might want to reconsider your approach.



Jimmy's role is a rather unique one in that he is literally tasked with enabling data mesh to happen and making sure they are focusing on the right things. That means a myriad of different things, but a lot of it is ensuring the communication and collaboration happens where it needs to while still focusing on the big picture. It's a bit like an American football coach who coaches the entire team but is still calling plays and making the minute decisions during the game too. It's a big set of tasks to take on.


At Northern Trust, they started their mesh implementation with the innovators according to Jimmy. That's a common pattern for all tech innovation, not just in data mesh - find the people enthusiastic to try something new so you don't have to spend half your time driving buy-in. There obviously also had to be a need where data mesh could work and would be a differentiator.


For Jimmy, one big complexity factor he is seeing is around data products. He believes that you really start to drive the value of a data product higher the more relevant and interoperable data sets you can include in the data product - within reason of course. But as you add that fourth or fifth data set, it gets complex to maintain interoperability and consumability even at the data product level. But that complexity is where a big part of the value really lies in data mesh - it pushes organizations to take on that complexity, but in a scalable way.


There is a bit of a push/pull in data mesh for Jimmy: organic data product growth from new use cases versus the central team pushing for the value of more and more interoperable data - basically creating data products that fill the gaps between existing data products to enable additional use cases. New use cases emerge the more well-crafted data products you have that link data across domains; but the question becomes: do you push for new data products that don't have a specific use case - inorganic growth - in order to create a tipping point where those use cases can quickly emerge? It would mean less work for consumers and faster time to market if the data is already available instead of having to work with the producing team. But will the data products be made well enough to serve those use cases? Are people ready to go and discover pre-made data products instead of ones tailored to their needs? And should this only apply to new data products, or how would a central team spot when additional attributes are needed to take an emerging data product from serving only one use case to being more globally valuable? Whether to pursue inorganic data product creation is still an open question in data mesh.


At the start of their journey - and even though they are two-plus years in, it's still relatively early - Jimmy and team are focusing on existing, known use cases as they build out the available data and improve their capabilities. Capabilities not just to deliver new data products but also to deliver incremental data that fits well into the already existing set of available data on the mesh. It's about building out the entire picture instead of focusing too much at the micro level. Scott note: balancing that micro and macro level is hard - extremely hard? - but the earlier you get good at figuring out how to add value at the overall mesh level while serving use cases, the more value you will deliver with each incremental data product.


Jimmy talked about how with data mesh, even though we can deliver scalable data products relatively quickly, there can be more initial friction for new use cases. E.g. teams have to go and collect the necessary information to do the governance well instead of trying to bolt governance on at the end. And some might be frustrated or not bought in that the upfront friction is worth it. So he's trying to show that the value far exceeds that initial extra friction, but people will of course resist new ways of working. Such is the nature of working with humans 😅


When asked what success looks like for them, Jimmy pointed to showing value. It's important to note that isn't merely delivering value but being able to show that value. As Jerry Maguire said, "SHOW ME THE MONEY!" Being able to show value generation helps to build momentum and adoption. If you are proving value, then it's not nearly as difficult to get incremental investment. Teams want to participate and capture value too. Excitement builds. But it's also important to note that what success looks like will change - maybe not wholesale but at least in part - in different phases of your implementation.


For Jimmy, there are two important aspects of your implementation, essentially the setup and the knockdown. You have to set up your implementation for success by building out the platform and capabilities, but getting teams to actually adopt is still crucial. Even if you built something amazing, you still have to work with people on the shift in mindset and approach to get them to buy in and adopt. A great platform that no one is using isn't really a great platform…


Keeping the momentum of the whole mesh implementation going until you reach critical mass is very challenging. Jimmy talked about how getting teams with quite different capability and speed levels to work together can be hard. No organization is built to all move together as one - it's not a car that's built to move as one unit, it's more like a group of cars - so you need to really focus on the coordination, collaboration, and especially communication. ABC - Always Be Communicating 😎


Jimmy believes that the central data team in data mesh should be a key point of leverage. They can jump in to help a domain early in their journey to get something delivered while raising that domain's capabilities. A central team can bring repeatable patterns to find easy paths for new domains. That way, the domains still learn by doing but they don't have to learn by repeatedly failing. But once a domain is capable enough, the central team tries to move out quickly to give the domains autonomy to do what's valuable. They are also building a community of practice for practitioners to share insights with each other, providing even more leverage and discovering more repeatable, high value patterns.


When asked about bringing a domain up to speed and the question of complexity, Jimmy strongly believes that you should start simple. You don't hand a knife to the 7-year-old who wants to help you cook and let them go wild on day 1. Get them into a groove, give them confidence and an understanding of how to deal with data, then you can start to think about adding complexity. Dealing with data is complex enough; keep it simple for them to deliver value initially.


When thinking about success and things like a new domain's time to first data product, Jimmy believes you can learn where there is friction, but the actual times vary quite a bit. So you might see lengthening times for new domains to launch a data product, but that can be a good thing because you are dealing with domains who really aren't sure what they are doing and have to learn a ton about data in general and their own data - it means you are penetrating the less data-savvy parts of the organization. All else equal, you want to go faster, but take the numbers with a grain of salt.


There's also the fun of trying to thread the needle of data modeled for the initial use case - so fit for purpose - yet also modeled so it fits well and is interoperable with the rest of the data in your available mesh of data products. Jimmy said this is especially hard for domains just getting up to speed with their data and data modeling, so prepare for them to move slower and need more help.


When asked about where there is friction in the process of bringing on new domains that we _shouldn't_ try to reduce, Jimmy pointed to learning and understanding. People need to take the time to understand how to do data work but also to understand the new ways of working and why the organization is going in this direction. It's the old question: are you trying to get them to follow the steps you dictate or to achieve the target outcome you give them? Learning takes time, don't rush it.


Jimmy's role is pretty unique compared to what other organizations have shared of their own stories. He's focused on helping people focus in the right areas but also on helping them connect the dots. There is such a big picture when you think about the entire information landscape of an organization, and helping people connect to each other and see where they could enhance that bigger picture is highly valuable. It drives better value while also reducing miscommunication, duplication of work, and wasted time. Scott note: this is somewhat similar to the 'Data Sherpa' concept I've mentioned repeatedly that just about everyone looks at me like I'm a madman when I bring up.


The question of what to decentralize versus centralize is a tough one for every organization doing data mesh. Jimmy pointed to the fact that where an organization starts - highly centralized or highly decentralized - means a very different journey ahead.
