#159 Focusing on the Problems - And Business - at Hand in Your Data Tool Selection Process - Interview w/ Brandon Beidel
Episode 159 · 24th November 2022 · Data Mesh Radio · Data as a Product Podcast Network
Runtime: 01:16:15


Shownotes

Sign up for Data Mesh Understanding's free roundtable and introduction programs here: https://landing.datameshunderstanding.com/

Please Rate and Review us on your podcast app of choice!

If you want to be a guest or give feedback (suggestions for topics, comments, etc.), please see here

Episode list and links to all available episode transcripts here.

Provided as a free resource by Data Mesh Understanding / Scott Hirleman. Get in touch with Scott on LinkedIn if you want to chat data mesh.

Transcript for this episode (link) provided by Starburst. See their Data Mesh Summit recordings here and their great data mesh resource center here. You can download their Data Mesh for Dummies e-book (info gated) here.

LinkedIn: https://www.linkedin.com/in/brandonbeidel/

In this episode, Scott interviewed Brandon Beidel, Director of Product at Red Ventures.

Some key takeaways/thoughts from Brandon's point of view:

  1. Be willing to change your mind, especially based on new information. Be willing to measure and iterate. It's easy to get attached to tools or tech because they are cool. Don't! Stay objective.
  2. It's crucial to align on what problem(s) you are trying to solve and why before moving forward on vendor/tool selection, whether you build or buy. If it doesn't have a positive return on investment, why do the work?
  3. Beware the sunk cost fallacy! It's easy to not want to shut something down that you've spent a lot on. But don't throw good money after bad.
  4. When gathering requirements and negotiating, keep a 'maniacal focus' on asking "what does this drive for the business?" You can quickly sort the nice-to-haves from the needs and have an open and honest conversation about the cost/benefit of each aspect of a request.
  5. When thinking about maximizing value, there is always one constraint that is the bottleneck. You can optimize other things but they won't drive the value. Find and fix the value bottleneck.
  6. A simple two-axis framework for thinking about use cases and requirements is value versus complexity. Look for high-value, low-complexity work first.
  7. Be open and honest in discussions around expected costs of work/tools - which can be considered part of the complexity. The data consumers understand the value and can weigh the return on investment.
  8. It's very important to understand data consumers' incentives so you can collaboratively figure out what is best for all parties.
  9. Look to create - in the open - a decision journal for build-versus-buy and vendor selection decisions. It will create an open environment and get your thoughts better organized.
  10. Your decision journal will make it politically easier to say you have new information and should consider a change. And you can better measure whether your assumptions were right and whether a tool or solution is still working for you.
  11. It's crucial to consider what happens if a tool selection is majorly successful - what if usage is 10x or 100x your initial expectation? Some selections have really poor unit economics at that scale, so this shouldn't be overlooked.
  12. It's easy to over-innovate. Think of having a certain number of innovation tokens. The cost of change is real, and it also taxes people's patience. Look first at whether existing tooling or capabilities support most of your use case.
  13. Total cost of ownership - not just initial purchase cost - is crucial. How much of your team's time will be spent managing and maintaining the tool? Look especially at skills, governance controls, and ability to measure if you are successful.
  14. Perfect is the enemy of good in choosing tools. Use a well-defined process to avoid really bad decisions, but spending time to find the absolute best solution when any one of six choices will do just fine is rarely worth it.
  15. Having your reasoning and process written down and in the open drives trust - trust in the initial decision and trust for when it's time to reevaluate a tool. It also makes it easier to spot if something relative to your initial assumptions has changed.
  16. Seek out those who might be the most against your decision. Take the time to understand their pain points and concerns; try to incorporate their concerns and align their incentives if possible.
  17. When adding a new tool or serving a new use case, focus on how you will measure whether you are successful, now and in the future. It doesn't have to be perfect, but without it you won't know how well you are doing and you'll miss a great opportunity to learn and do better next time.
  18. When you select a vendor, there is a logical time to reevaluate your choice and if it's right going forward - the contract renewal. And there are easily defined economics in play. You should do the same for anything you've built - set an artificial time to reevaluate, don't wait for things to go bad first.
  19. Consider using the anti-corruption layer concept from microservices in data. You can avoid a lot of data integration costs and more easily rip things out of your platform. But it's okay to leverage proprietary solutions too, just be cognizant they may become an issue.
  20. Involve the data consumers early in the process around serving their use case. It helps for them to have skin in the game so they are focused on driving to the most business-efficient outcome.


Brandon started off with a theme he'd hit on multiple times because it's so important: before proceeding on selecting a tool/solution, agree on what needs to be done and why. What will this drive for the business? It's easy to lose the forest for the trees - or even the leaves - when building out data platforms. The first part - agree - is necessary because you need alignment to move forward with a proper understanding of the problem to be solved. The 'what needs to be done and why' part means there is a clear roadmap and a specific problem you are trying to solve during tool evaluation, instead of a focus on the tool or feature itself.


Having a maniacal focus on 'what does this drive for the business' will mean you can align better on what is needed for a use case versus "a Christmas list" as Brandon put it. Having clear and open communication about what is a requirement versus a desire and the cost of each potential item on a data consumer's list has led to very efficient prioritization for him.


A key way of working when embarking on a new use case is to involve the data consumers early on - and make sure they have skin in the game, according to Brandon. The data team's engineering time being on the data consumer's P&L means the data consumers are more focused on driving to key results than cool features or nice-to-haves. And having open and honest discussions about the expected costs to deliver on each really helps them weigh the benefits. An important part of getting to a good outcome in these discussions is understanding and attempting to align on everyone's incentives.


Brandon mentioned how when discussing cost/benefits and different platform approaches, it's very easy to get overly complex. But that hurts the conversation and often devolves into technical discussions with people who care about the business output, not the tech. Brandon has two axes that he uses - complexity and value. Don't overcomplicate it. It's pretty easy to start with use cases that are high value and low complexity when you start to look at it through this lens. High value but high complexity use cases are tough but can obviously provide very significant value when you've taken care of the low hanging fruit.
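
As a rough illustration, ranking use cases through that lens can be as simple as the sketch below; the use cases and the 1-5 scores are invented for illustration, not taken from the episode.

```python
# Hypothetical sketch of the value-versus-complexity lens.
# The use cases and 1-5 scores are invented for illustration.
use_cases = [
    {"name": "exec revenue dashboard", "value": 5, "complexity": 2},
    {"name": "real-time personalization", "value": 5, "complexity": 5},
    {"name": "ad-hoc churn analysis", "value": 3, "complexity": 2},
    {"name": "vanity click heatmap", "value": 1, "complexity": 3},
]

# Surface high-value, low-complexity work first (the low hanging fruit),
# then high-value/high-complexity, and deprioritize the rest.
for uc in sorted(use_cases, key=lambda u: (-u["value"], u["complexity"])):
    print(f"{uc['name']}: value={uc['value']}, complexity={uc['complexity']}")
```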


One thing Brandon mentioned - and Scott recommends more broadly for data mesh journeys - is a decision journal. Having a place in the open where you write down the criteria for a decision makes it so people can feel more comfortable with the decisions made. What were the capabilities needed, what was the problem, what was the expected value, etc.? When getting down to the decision itself, how viable is the solution, what are the alternatives, what is the likely cost, what are the failure scenarios, etc.? It helps you reevaluate in the future as well and have empathy for past decisions. Brandon has a list of many more crucial questions.
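
For a flavor of what an entry might capture, here is a minimal sketch; the structure and field names are our own, loosely based on the questions above, not Brandon's actual template.

```python
from dataclasses import dataclass, field

# One possible shape for a decision journal entry, loosely based on the
# questions mentioned in the episode. Field names are illustrative.
@dataclass
class DecisionRecord:
    problem: str                    # what are we trying to solve and why?
    expected_value: str             # what does this drive for the business?
    capabilities_needed: list[str]  # requirements vs. nice-to-haves
    alternatives: list[str]         # what else did we consider?
    likely_cost: str                # build/buy, people, infra
    failure_scenarios: list[str]    # how could this go wrong?
    assumptions: list[str] = field(default_factory=list)  # revisit these later
    reevaluate_on: str = ""         # e.g. contract renewal or a set date
```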


A really interesting point Brandon brought up regarding writing out your decision criteria is what happens if it's wildly successful. What happens if the tool/feature you choose, whether built or bought, has 10x the expected usage? 100x? Will the unit economics hold up, or will this cause issues - and how do you plan to adapt?
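
A quick back-of-the-envelope sketch of that question - with entirely hypothetical pricing - shows how unit economics can flip at scale:

```python
# Back-of-the-envelope sketch of the 10x/100x question. All numbers invented:
# a per-query priced vendor tool vs. a flat-plus-compute self-hosted option.
PER_QUERY_PRICE = 0.002          # vendor charge per query (hypothetical)
FLAT_COST = 4_000                # monthly cost of the self-hosted option
PER_QUERY_COMPUTE = 0.0001       # marginal compute per query (hypothetical)

for multiplier in (1, 10, 100):
    queries = 1_000_000 * multiplier
    vendor = queries * PER_QUERY_PRICE
    self_hosted = FLAT_COST + queries * PER_QUERY_COMPUTE
    print(f"{multiplier:>3}x ({queries:,} queries/mo): "
          f"vendor=${vendor:,.0f}, self-hosted=${self_hosted:,.0f}")
```

At 1x the per-query vendor looks cheaper; by 100x the picture has completely inverted - exactly the kind of scenario worth writing into the decision journal up front.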


According to Brandon, looking at total cost of ownership - not just the short-term or initial purchase cost - is crucial when selecting a tool. Do you need training to actually leverage the tool and manage it appropriately? Does it integrate well with your existing platform/tools? Again, this circles back to value versus complexity. Costs should be factored into the complexity discussion.
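
To make that concrete, here is an illustrative year-one tally with hypothetical figures; note how the license fee can end up being a minority of the total:

```python
# Illustrative total-cost-of-ownership sketch (all figures hypothetical).
# The purchase price is often the smallest line item.
license_per_year = 30_000
training = 10_000                          # getting the team able to use it
integration = 25_000                       # wiring into the existing platform
maintenance_hours_per_month = 20
loaded_hourly_rate = 120

yearly_maintenance = maintenance_hours_per_month * 12 * loaded_hourly_rate
tco_year_one = license_per_year + training + integration + yearly_maintenance
print(f"Year-one TCO: ${tco_year_one:,} "
      f"(the license is only ${license_per_year:,} of that)")
```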


Brandon emphasized that perfect is the enemy of good. There is rarely a good return on finding the absolute best choice - the real benefit is in avoiding the wrong choices. If tool B has a 5% better return than tool A but you had to spend months figuring that out - and what if it's 6 tools…? - that's not worth doing.


As part of Brandon's decision journal recommendation, he circled back on a few other benefits. A big one is that people are more likely to be aligned with the decision if they can follow the logic. If it's just a choice instead of seeing why the choice was made, there's often more friction and pushback. Also, it's easier to monitor if things have changed relative to your assumptions when you have your assumptions explicitly stated :) Having these assumptions on paper also gives you better buy-in to make changes because again, people can follow the logic.


When it comes to driving buy-in, Brandon recommends seeking out the people who are most likely to be detractors to your potential solution. Use collaborative negotiation. At the very least, go and understand their context and pain points. Try to incorporate that into your solution and look to align incentives where possible. Too often people don't feel seen or heard.


As many guests have mentioned, look to set your success criteria - and especially your ways of measuring - before you start implementation work. It doesn't have to be perfect, but without it, how will you know when you are doing well? And you can learn from things that don't go to plan much better if you can measure against an actual plan.


Brandon discussed how when you make a choice to go with a vendor, the contract renewal - or specifically a few months before the renewal - is the time to evaluate if it was a good choice and if you should continue forward with that choice. You should set up an artificial timeline to do the same for anything built internally. Instead of waiting for signals that you've made a wrong choice, regularly reevaluate. It's important to reflect back and see if it's actually solving the challenges you wanted it to solve.


Beware the sunk cost fallacy! According to Brandon, it's very common to want to keep chasing things you've already spent lots of time and/or effort on, or things that had a lot of promise but aren't meeting expectations. Don't throw good money after bad. Take it as a learning opportunity and move on.


Circling back on tool stewardship and total cost of ownership (TCO), Brandon uses a framework of three main things to consider: skills, governance controls, and ability to measure. Do you actually have the people who can leverage a tool? Do you have the governance in place to use it properly? How will you measure if the tools are successful and being used as expected? He had a lot of good examples in the episode.


Brandon recommends people look into applying the anti-corruption layer concept from microservices to their data platform. It can lower integration costs and also make it far easier to rip things out. You don't want to focus too much on this, though, and never leverage proprietary features. You don't need to build every capability from scratch, but also don't unnecessarily lock yourself in.
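
For a sense of what that looks like in practice, here is a minimal anti-corruption layer sketch; VendorFreshnessCheck and the vendor SDK calls are hypothetical stand-ins, not any real product's API:

```python
from typing import Protocol

# A minimal anti-corruption layer sketch. The vendor client and its methods
# are hypothetical stand-ins for any proprietary tool's SDK.
class FreshnessCheck(Protocol):
    def is_fresh(self, table: str, max_hours: int) -> bool: ...

class VendorFreshnessCheck:
    """Adapter: translates our interface onto the vendor's proprietary API."""
    def __init__(self, vendor_client):
        self._client = vendor_client  # hypothetical proprietary SDK object

    def is_fresh(self, table: str, max_hours: int) -> bool:
        # The vendor-specific call and response shape stay isolated in here,
        # so ripping the vendor out later only touches this one class.
        result = self._client.run_freshness_monitor(table)
        return result.hours_since_update <= max_hours
```

Because the vendor-specific call lives behind one small adapter, swapping vendors - or moving to something built in-house - only touches that class, not every consumer of the freshness check.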


Some other tidbits:

Your business counterparts probably care far less about which vendor or feature you pick than about what it gives them. Start at the high level, mapping out what's needed.


Define the problem you are trying to solve first; don't start from the vendor.


People are only willing to deal with so much innovation. Think of innovation as tokens that people collect from you each time you try something new - they aren't an easily renewable resource. Look at what you already have to see if it will work.


When thinking about maximizing value, there is always one constraint that is the bottleneck. You can optimize other things but they won't drive the value. Find and fix the bottleneck.


"Knowledge has a half-life, decisions have a half-life." Don't get analysis paralysis, look to move quickly.


Be willing to measure and iterate. Be willing to change your mind, especially based on new information. It's easy to get attached to tools or tech because they are cool. Don't; stay objective.



Data Mesh Radio is hosted by Scott Hirleman. If you want to connect with Scott, reach out to him on LinkedIn: https://www.linkedin.com/in/scotthirleman/

If you want to learn more and/or join the Data Mesh Learning Community, see here: https://datameshlearning.com/community/


All music used this episode was found on PixaBay and was created by (including slight edits by Scott Hirleman): Lesfm, MondayHopes, SergeQuadrado, ItsWatR, Lexin_Music, and/or nevesf
