Sentry: Indragie Karunaratne
Episode 61 • 30th January 2026 • Behind The Product • SEP

Transcript

Indragie Karunaratne:

And the rubric sounds simple, but it actually took us some time to come up with that. Right. Like when we were going multi product, we definitely built stuff that we definitely don't have tens of thousands of orgs using even today.

Right.

So we made a couple of misfires on that front and then we sort of found out what worked and then we kind of, you know, distilled that into this rubric after the fact. So this is how we think about it going forward, but it takes some time for us to learn.

Zac Darnell:

Welcome to Behind the Product, a podcast by SEP, where we believe it takes more than a great idea to make a great product.

We've been around for over 30 years building software that matters, and we've set out to explore the people, practices, and philosophies to try and capture what's behind great software products. So join us on this journey of conversation with the folks that bring ideas to life.

I thought it would be good, before we jump into any of the specific questions that we kind of laid out, to just talk about your background. So tell us a little bit about that.

I think Lauren saw on your LinkedIn profile that you almost hit all of the FAANG companies on your internship roadmap there. So it'd be good to level set a little bit about you and your background.

Indragie Karunaratne:

Yeah, that's a great idea. So kind of going way back, I got my start in programming as a kid, right.

So I used to kind of build software on the side with friends that I met on the Internet. And as I kind of progressed through high school and university, I started looking at it as a more sort of like serious career option.

And that's how I got in touch with a few folks at big tech companies who had been following my work primarily in open source and some of the stuff that I've been talking about publicly and that's how I ended up at Amazon. And then the other experiences at Facebook and Google were just sort of trying to pick up some different experiences at different kinds of companies.

They're all sort of like big tech companies, and in retrospect I wish I'd, you know, tried startups or something earlier on, but they were different enough that I got sort of like a good breadth of experience there. And so I primarily worked on sort of like client side development.

I did a lot of Apple platform development, building software for iOS, macOS, that kind of thing. That was also the focus of my, you know, side project and open source work. So I kind of, you know, built a reputation for that kind of work.

And then after, you know, after I graduated, I was definitely committed to moving to the Bay Area working at a tech company and I looked at all my options and I really enjoyed working at Facebook. So I joined there right out of university, spent about three years working on mobile infrastructure.

So kind of moving away from building more end user product stuff to kind of building sort of like the fundamental platform components that go into making a product that is deployed to billions of users. So a huge scale with engineering teams of hundreds of people working on it. So that was a really cool experience.

And then that led me to go and start a company. So I learned a bunch of stuff about mobile infrastructure and how to make mobile apps fast and performant.

So I founded a startup called Specto.

We took some of those lessons we learned about building performant mobile apps and tried to build a product out of it for the broader market.

Specto was acquired by Sentry.

Zac Darnell:

That's awesome. So, okay, hot take. Of the three big co internships, what was the best one?

Indragie Karunaratne:

Like for real, probably Facebook, because that's where I ended up, right? I definitely felt like, yeah, it was, it was the most enjoyable work environment and especially for the niche that I was in.

So in kind of the iOS and Apple development niche, I saw a lot of people that I had followed and grown to respect, who had worked on a lot of cool projects in that space, leaving to go work at Facebook. So at the time there were a number of great product designers and engineers, the best at their craft, working there.

And so it was, it was almost less about the company and the products they were building and more that I kind of wanted to surround myself with, with that kind of talent. And that turned out to be a good decision.

Zac Darnell:

Okay, that, that makes sense. You know, I guess, I guess if you like mobile development and you want to be in that kind of a space, Facebook kind of makes sense.

Although they all have mobile apps, right? Although I will say Amazon's is one of the worst of all the big companies.

Indragie Karunaratne:

It's pretty bad. Amazon, I would say is, you know, they definitely built great infrastructure. AWS is great, it's rock solid.

But on the consumer facing side of things, they don't tend to value, you know, design and stuff as much as some of the other companies.

Zac Darnell:

So that checks out. Not at all surprising.

Lauren Lecoffre:

So sometimes as a startup, or in an innovation space, especially in AI right now, there are so many times where you get new inspiration and you just want to dive into this new direction while you're building your product. So how do you balance that threshold for yourself?

Indragie Karunaratne:

Yeah, great question. So I think that's especially relevant today, especially in the AI space. Right.

So if you kind of follow what's happening, there's new products coming out basically every week, you know, huge new announcements, new companies being founded, you know, new ideas on how to build the best AI products.

And so there's like a balance to strike between kind of keeping up with the fast pace of the environment you're in, which you have no control over. Right. So you kind of get with it or you get left behind.

So there is an aspect of that, but then there's also an aspect of you're at a mid sized company with a few hundred people on staff. Changing the direction of that ship takes a lot of effort and it has consequences if you point it the wrong way.

So I've always tried to sort of strike the balance between that. Right. So we have, you know, we try to long term plan for things that we want to commit significant time to that we really believe in.

So there's, you know, most of the people are working on these long term projects, they're not, you know, constantly rotating on, you know, whatever the new thing of the week is. But I always reserve part of our sort of budget.

So like a few engineers on the team that are able to jump in and start prototyping and validating new concepts quickly, so we can at least get an idea of whether that's something we want to turn into a long term investment. So I think it's, yeah, it's about that balance, it's about allocating resources that way.

But yeah, we've done our best to maintain a smaller, more experimental startup nature while being a real business.

Lauren Lecoffre:

Great.

Zac Darnell:

That's a hard thing to balance.

Indragie Karunaratne:

Yeah, absolutely.

Zac Darnell:

And you know, in my career, I feel like I've seen companies do that really, really well and others struggle to continue on that innovation path while still taking care of the things that are important and paying the bills. Right. Like that's important. Let's also level set for folks listening who may not know much about Sentry.

So from what I remember hearing when I met a few of your folks at GitHub Universe earlier this year, started out as error monitoring. That was kind of the core focus of the business and then has expanded into other areas.

Things like performance, session replays, probably a whole lot more. It's more of a platform these days. For anybody listening, give us the quick and dirty on Sentry.

And then I'm kind of curious about some of the evolution over time, but we'll come back.

Indragie Karunaratne:

Yeah, so yeah, you're totally right. So Sentry spent about the first 10 years working on a single product, the error monitoring product.

And so Sentry is basically an application monitoring platform. So essentially what we do is customers ship the Sentry SDK in their apps.

The Sentry SDKs collect a bunch of telemetry signals about how the app is behaving. So that's primarily errors, which is what we started with.

But we've grown to add all kinds of other telemetry like traces and logs and profiles and things. And all of these are sort of a piece of the puzzle about how an application behaves when it's actually running in production.

And so our goal is to help developers diagnose and fix a broad set of failure modes that happen in real world software systems. And most of these failure modes are caused by some sort of bug in the code. Right? Bugs in the code are not the only reason for failures.

Like there could be some sort of infrastructure provider outage, GCP or AWS goes down and then your software breaks, of course. But a lot of bugs are caused by developers writing logically incorrect code. And so a lot of that is what we're trying to solve.

So that's kind of where we started. I can talk more to the multi product expansion, but that's kind of the gist of the product.
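
To make that concrete, here is a minimal sketch of what that instrumentation looks like from the application side, using Sentry's browser JavaScript SDK. The DSN value and the failing function are placeholders, and the sample rate is illustrative.

```typescript
import * as Sentry from "@sentry/browser";

// Initialize once at app startup. The DSN (a placeholder here) tells the
// SDK which Sentry project should receive the telemetry.
Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
  tracesSampleRate: 1.0, // also collect performance/tracing telemetry
});

// Stand-in for any application code that can fail.
function riskyOperation(): void {
  throw new Error("example failure");
}

try {
  riskyOperation();
} catch (err) {
  // Errors are the core signal: report the exception with its stack trace.
  Sentry.captureException(err);
}
```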

Zac Darnell:

No, that makes total sense. I mean, and really it's that multi product expansion. I think that is probably the most interesting thing to dig into because that's challenging.

Like I go back to Lauren's prior question, like that balance is really hard. How do we continue to go forward while also making sure that the thing that got us here doesn't hamstring us.

But we also don't forget the customers that we've built along the way to help us keep going forward. So maybe talk through some of what that looked like, and then I'm kind of curious, you know, like what did that balance look like?

How do you, how do you keep the sprawl and the focus together at the same time.

Indragie Karunaratne:

Yeah. So we definitely continue to invest in that core error monitoring product. Right.

It still remains our biggest product, you know, largest by number of customers, revenue, whatever metric you look at. But what we grew to understand over time is that errors are definitely the most fundamental piece of telemetry you need to debug the most critical failure modes of a system. So if something crashes, there's an exception or something, it typically brings the program down.

And so that's a very visible failure mode in the eyes of the user. But then we also realize that there are a number of things that are less black and white. Right.

So something might be slow, which means that it still works, but it's still a subpar user experience. Right. So those are kinds of things that you might want to consider fixing if you're trying to build a high quality piece of software.

And then around that same time there was this broader shift happening in computing.

And then one of the things that resulted from that was this architectural change where people started adopting these like microservices architectures.

So instead of running like a single program, like on a single machine or whatever, they would build these complex distributed systems where they're breaking up their app into multiple services, distributing them on a bunch of servers, and suddenly debugging those systems gets a lot more complicated. So that was sort of like the origin of our second product. So our first kind of expansion into being a multi product company.

That product was performance monitoring, as we called it at the time.

And we can talk about that a bit later.

But what this essentially was under the hood was a distributed tracing product that allowed developers to basically trace a request as it goes through multiple layers of their distributed system. So it'll go from service to service to service.

You can kind of track all the hops, see how long each hop takes, see how long all the sub-operations take, and it gives you great visibility into a system that is distributed.
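
To make the mechanics visible, here's a conceptual sketch of that hop-to-hop propagation. Sentry's SDKs handle this automatically; the header format shown follows the documented sentry-trace convention, but the service names and URL are hypothetical.

```typescript
import { randomBytes } from "crypto";

// "<traceId>-<spanId>-<sampled>" is the sentry-trace header convention:
// the traceId is shared by every hop, the spanId identifies this hop.
function newId(bytes: number): string {
  return randomBytes(bytes).toString("hex");
}

// Service A: start a trace and pass its identity downstream.
async function serviceA(): Promise<void> {
  const traceId = newId(16); // 32 hex chars
  const spanId = newId(8); // 16 hex chars
  await fetch("https://service-b.internal/work", {
    headers: { "sentry-trace": `${traceId}-${spanId}-1` },
  });
}

// Service B: continue the same trace rather than starting a new one.
// If a proxy or a missing integration drops this header, the trace
// "breaks" at this point, which is exactly the misconfiguration
// described later in the conversation.
function serviceB(headers: Record<string, string>): void {
  const incoming = headers["sentry-trace"];
  const traceId = incoming ? incoming.split("-")[0] : newId(16);
  console.log(`continuing trace ${traceId}`);
}
```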

And so we eventually focused on solving these performance monitoring use cases, but it was actually a fundamental piece of technology that grew to be important for more use cases than just performance. And then on the focus question, right. So again, we continued investing in error monitoring the whole time.

This new initiative started out as kind of like a small team within the company building a new product. And that's still largely how we build new products today.

So we have sort of like the stable focus areas, like I was saying to Lauren's question, where there's kind of like a lot of investment in those.

And then we start up these, like, kind of smaller startup like teams within the company whose goal it is to get a new product idea from 0 to 1 and validate product market fit really quickly.

Zac Darnell:

Oh, okay. Can we talk about. Can we talk about engineering culture for a minute then?

Indragie Karunaratne:

Yeah.

Zac Darnell:

So I could make the assumption, just based on working in tech for 20 years, that most of your folks want to work on some of those smaller startup teams. They may not necessarily find it as fun to work on kind of the cash cow. Right. Let's keep the business running. How have you managed that?

Indragie Karunaratne:

Yeah, that's definitely true in the general sense. People always want to work on the exciting thing. And we're seeing that again with AI. Lots of people at the company want to work on AI stuff.

Obviously we can't staff the entire company on AI things. There's plenty of things to do to keep the lights on. But I think it does appeal to different kinds of people in a way.

We do find that there are product engineers who really, really like the zero to one thing in addition to just the engineering part.

They really like being part of like the product development life cycle where they're ideating, coming up with new product concepts, talking to customers, validating these things. And then there are engineers who don't like doing that stuff. They just like solving hard technical challenges.

And the hard technical challenges usually come about at a particular scale of product. Right? Like error monitoring at a certain maturity, a certain place in the growth curve.

And so the engineers that want to work on that stuff, they tend to work on, like the more stable stuff because that's where the opportunity to work on that is.

And the engineers that want to just work on new product stuff, and don't care as much about the tech, tend to work on some of these newer initiatives. There is a natural way where it kind of does split out a bit.

Zac Darnell:

No, that's fair. That's actually a really good way to think about it. I think I can think of those personalities in our own building, you know.

So that's a good idea. What about how you guys make decisions on how much to invest in new things, as a business, as a team? Like, okay, we're going to allocate X amount of salary and focus from our annual budget for this new opportunity. How do you make those decisions? Is that something we could poke at?

Indragie Karunaratne:

Yeah, absolutely.

So there's kind of two key questions that we asked initially when we were trying to decide whether we want to experiment with a new product investment. Right. So question number one is how many of the 150,000 orgs that use Sentry do we expect would want to use this product? Right.

And this is not like a precise estimation by any means.

It's sort of like an order of magnitude thing where it's like we do not want to build stuff that's used by 10 customers or 100 customers or even 1,000 customers. Where it kind of starts to get interesting is the tens of thousands of customers. That's a meaningful kind of part of our user base.

So we try to approximate it that way. And that might involve kind of like looking at the competitive landscape, seeing what other customers of Sentry are using in addition to Sentry.

Right. And then the second question is, does this product help software teams fix problems in their software?

So I say this because there is a wide body of work that could be kind of considered monitoring or analytics that is not related to fixing broken code. Right. So you might think of like an analytics product where you can build a dashboard of how many times users are clicking on a thing.

Zac Darnell:

Yeah. Like a Mixpanel, or there's another big one that I can't think of right now.

Indragie Karunaratne:

Yeah, exactly. So there's tons of those products. Right.

And in theory, they're somewhat similar in the sense that they're kind of doing some monitoring and providing some analytics, but they're not about fixing broken code. So that's sort of where we kind of draw the line.

So basically, if we can validate that we think tens of thousands of people or organizations are going to use this and it is serving the goal of fixing broken code, then that's something that we would be interested in investing in.

Zac Darnell:

That's really cool. I love hearing that. Like there's a rubric and if it doesn't meet this rubric, it's not something we're going to consider. Again, like, sounds simple.

I think it's hard.

It's hard to live out because, man, what if you found an opportunity where you could sell a thing to 100,000 people, but it didn't really align to fixing broken code? I think that would be a very easy thing to fall into, and it might derail you from the direction of your core business.

Indragie Karunaratne:

And the rubric sounds simple, but it actually took us some time to come up with that. Right. Like when we were going multi product, we definitely built stuff that we definitely don't have tens of thousands of orgs using even today.

Right.

So we made a couple of misfires on that front and then we sort of found out what worked and then we kind of distilled that into this rubric after the fact. So this is how we think about it going forward. But it took some time for us to learn.

Zac Darnell:

Any favorite failings? I feel like that's such a fun thing to think back on.

Indragie Karunaratne:

Yeah, I mean, failings in the sense that it's not used by as many customers as we'd want. Right. I think that's sort of like the way that we would define that.

They're still products; they still serve a useful function to some subset of the customers. But one example, and this one's near and dear to me because this is what the company that I built was working on.

This is the product that we were building before it got acquired by Sentry. So we were building a CPU profiling product. Right.

So basically the idea is that we would collect these really low level metrics in production about how long specific functions in your code took to run.

It's a lot of really dense, really detailed data, and it's really useful if you're doing detailed performance engineering or performance optimization on your system. It's very powerful data, but the downside was that it was very difficult to read and understand.

The typical developer would have to spend some time ramping up and understanding how the tech works and how to interpret the visualizations before they could actually use it. And so the profiling product has a very small percentage of Sentry customers using it.

The customers that use it, the ones that enjoy working on performance and performance optimization problems, they get great value out of it. But our typical customer is not concerned about performance to that extent. And even the ones who are, they often don't know how to use the tool.

And so that's an example of a thing where we definitely don't have tens of thousands of orgs using it. And so in retrospect, maybe that could have come later in the multi product expansion if we were to build it at all.
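
As a side note for anyone curious, turning profiling on is now only a couple of lines of configuration; interpreting the data is the hard part he describes. A minimal sketch, assuming Sentry's Node SDK plus its profiling add-on package, with a placeholder DSN and illustrative sample rates:

```typescript
import * as Sentry from "@sentry/node";
import { nodeProfilingIntegration } from "@sentry/profiling-node";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  integrations: [nodeProfilingIntegration()],
  // Profiles hang off sampled transactions, so both rates matter.
  tracesSampleRate: 1.0, // fraction of transactions to trace
  profilesSampleRate: 1.0, // fraction of traced transactions to profile
});
```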

Zac Darnell:

No, that's a good, that's a good story. I appreciate that. So, okay. As we continue forward in the multi product opportunity, you know, a lot of companies fall into one of two big buckets.

It's more nuanced than this, I know, but for simplicity's sake: sales driven or product driven. Do you feel like you guys fall into one of those two buckets?

Indragie Karunaratne:

Yeah, we are definitely a product driven company. So the goal has always been ubiquity. Right. So we are primarily a bottoms up, self serve kind of product.

That's why we have 150,000 organizations using it. We support 100 plus platforms and frameworks.

We just want every developer and in particular every kind of early, fast growing software team to adopt the product. We do have an enterprise sales motion, so we do sell into some of those larger organizations.

But like I was saying, with focus, we rarely build features that are specifically for these large orgs. Right. We typically focus on the things that speak to our self serve, small developer team kind of audience.

So yeah, we're definitely primarily a product led company.

Zac Darnell:

Okay. No, that's, that's really interesting.

Can you recall a time where there's been a little bit of tension between being more bottom up, more product driven, and, hey guys, I've got a hot deal on the line, can we get this thing going?

Indragie Karunaratne:

Yeah, that kind of stuff happens a lot. Right. And the incentives make sense on both sides.

I mean, on one side there is this really big great name customer and the team wants to close the deal. On the other side, we've committed most of our engineering resources to building things that are kind of more ubiquitous. Right.

So I won't say that we don't make exceptions at all. We occasionally do help some of these larger customers with something that is critical to close a deal.

So it's not like a hard and fast rule that we won't do stuff like that.

But there is that tension, and that's sort of one of the challenges of, you know, managing teams and working here at Sentry: you kind of have to know which opportunities make sense to make an exception for.

Zac Darnell:

So yeah, definitely, that makes sense. You know, running a business and leading people is not simple.

Indragie Karunaratne:

Yeah, absolutely.

Zac Darnell:

Love that. Well, I'm kind of curious about, you know, still in the expansion, the multi product space. I think fun stories are good.

One of the things I was thinking of prior to the show was around: is there a thing that surprised you guys, or you specifically, in either a customer adoption bucket or maybe a technical complexity bucket? Like, oh man, that was a lot harder than we thought it was going to be. In any of the products, when you kind of started taking on the multi product strategy?

Indragie Karunaratne:

Yeah, there are two examples that come to mind. So the first example is around that technical complexity piece.

So I mentioned that our first multi product expansion was building this distributed tracing product. And I would say that to this day, as hard as we've tried, it's still pretty difficult to set up and instrument correctly.

It is probably the most difficult to set up product that we sell.

And the reason for that is because for a distributed tracing product to work correctly, you have to have Sentry correctly instrumented across every service in your stack to get that end to end visibility. And then there are cases where for some reason the tracing does not propagate from service A to service B due to some misconfiguration.

And so we often end up needing this sort of like high touch onboarding to help customers figure out how to set this up correctly. Which works great with an enterprise sales model where we typically do have people kind of helping customers directly.

But it does not work as well with the self serve model where we kind of expect people to read the documentation, follow the steps and figure it out because it often does not work out of the box. So we've done a lot there to kind of make setup as simple as possible and automate as much of the configuration as possible.

But yeah, to this day it is a difficult product to set up. And I think that's a known thing sort of industry wide in this product category. The other example that I would give is logs.

So on adoption, this one was interesting because it has had incredibly fast growth. We've only had this product for, you know, less than six months at this point, and it has grown faster than any other product we've built before it.

I guess the most interesting anecdote here is that in hindsight you could think of logs being like a pretty obvious customer need, right? Like if you're a developer.

Zac Darnell:

I'm kind of confused by this conversation right now.

Indragie Karunaratne:

Yes, we're confused too. At times we're like, why didn't we build this sooner? Right? And I think there are a couple of good reasons for kind of, you know, deferring it until later in the multi product growth cycle.

One of them is that if we had built it earlier, it might have been a worse product.

And the reason I say that is because, over time, building multiple products, we kind of focused on this connected model where all of the different signals that we collect are correlated to each other directly through what we call a trace identifier.

So it doesn't matter if you're looking at logs or errors or whatever, you can see the logs that are associated with a particular error, or you can look at a distributed trace and go from that to the logs or errors connected to it. This connected data model is something that we took time to develop. And then logs was obvious; it plugs perfectly into that model.
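
A rough illustration of what that connected model buys you, purely conceptual rather than Sentry's actual schema: because every signal carries the same trace identifier, any one signal can be used to look up all the others.

```typescript
// Conceptual only: every telemetry signal carries the trace identifier.
interface TelemetryEvent {
  kind: "error" | "log" | "span" | "profile";
  traceId: string;
  timestamp: number;
  payload: unknown;
}

// Given any one event (say, an error), pull every other signal from the
// same trace: the logs around it, the spans it ran inside, and so on.
function relatedSignals(
  events: TelemetryEvent[],
  anchor: TelemetryEvent,
): TelemetryEvent[] {
  return events.filter((e) => e.traceId === anchor.traceId && e !== anchor);
}
```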

But if we had built it before we had built out that whole trace connectedness story, we might have built yet another sort of commodity, disconnected, siloed logging product. So in one sense there was a benefit to waiting till the strategy was more mature.

On the other hand, after errors, it was sort of like the most obvious logical next thing that all teams would need. It was probably that and not distributed tracing. Right.

Zac Darnell:

Well, but I mean even that is kind of a surprising aha.

Because what might feel logical in the moment, come to find out, well, it was actually good that we delayed this, because we learned some stuff that informed it and made it less of a disjointed bolt-on in your product. Because I think we've all used stuff where it's like, oh, you bolted that on and that kind of sucks.

I don't really love what you just added into that thing. And like nobody, nobody likes that. Right. When companies do it.

So that's actually really interesting because you're right, I would have said the exact same thing. Like errors, logs, like that. That makes total sense.

Indragie Karunaratne:

Seems obvious.

Zac Darnell:

Right? I wonder if we pivot a little bit.

You know, AI, we're on our third year of the AI hype cycle, and I don't know, on one hand I feel like I have AI fatigue, and on the other hand it's continuing to evolve really fast and new stuff comes out. So I guess, take my 5-hour Energy. When it comes to AI, you guys have an AI product. If I remember correctly, from what I read, it's called Seer.

Indragie Karunaratne:

That's right, yeah.

Zac Darnell:

Okay. And it's really around RCA.

Indragie Karunaratne:

Yeah, it's currently around two things: RCA, so root cause analysis, and then code review.

So we've built Seer into the pre production part of the development life cycle so it can review code before it goes to production.

Zac Darnell:

Interesting. Is that, like, meant to be a companion to something like a Claude Code or a GitHub Copilot?

Indragie Karunaratne:

Yes, in some way. Right.

So like we know that people are going to be generating code with coding agents, and we know that people are still going to be writing code by hand. And it doesn't kind of matter where the code came from in review.

And you could argue that potentially AI generated code needs even more review, because you didn't write it and you don't necessarily 100% understand how it works.

Zac Darnell:

Yes, yes.

Indragie Karunaratne:

Yeah. So independent of how the code is created, we want to make sure that we try to catch bugs before they go to production.

Zac Darnell:

Oh, interesting. So is it more around finding bugs and less about giving a developer or engineer specific feedback on their actual code?

Indragie Karunaratne:

That's actually something that we debated early on. We were trying to position this product because there are a number of products in this AI code review space that are sort of more generic.

Like, they'll give you feedback on code style and patterns and things like that. Some of those are just sort of like maintainability and stuff and not so much about the correctness of the code.

And so we decided that, you know, going back to our sort of, like, you know, focus on building products that help developers fix bugs, we were going to do the same thing here with code review, which is that instead of building a general review agent, we wanted to build something that specifically focuses on helping people find bugs in their code. So the coding agent is obviously.

Sorry, the code review agent is deliberately designed to not sort of like, comment on stylistic things and things like that and focus exclusively on that category of problems.

Zac Darnell:

Okay.

Lauren Lecoffre:

If I can jump in on what I was thinking about? No, you're good. So the team here, some of the engineers, will do something that is like a code review in a test script format.

And what that does is allows somebody to go work on something else while that's running. But we have some impediments and limitations in that, specifically because it's manually coded inside of the code that we're working in.

So can you tell me about what other problems you guys were facing that made it necessary to develop this part of the product?

Indragie Karunaratne:

Yeah. So Sentry primarily focuses on production monitoring. Right.

So basically what we can do is we can help you fix bugs after they've been shipped to customers. And that's not ideal. Right. So you have broken code that gets shipped to customers.

Customers see some sort of visible failure on their end, and then by the time they're complaining about it, then Sentry can go in and be like, oh, hey, here's the bug you shipped, here's the patch. Now you've got to go make that code change, redeploy your services, and it'll take some time for it to actually take effect and verify the fix.

And so we decided that as much as possible, we wanted to shift that left, meaning the more bugs you can catch earlier in the cycle, before they go to prod, the better. Right. So users never see it.

And so the advent of AI and LLMs kind of gave us the ability to build a review tool that was able to catch more of these issues. And we know that we're not going to catch all of them. Right.

There's still going to be some percentage that makes it to production. There are types of bugs that you cannot find during code review.

Like for example, a real world failure mode is that you have some sort of upstream infrastructure provider outage or like one of the services in your distributed system goes down. Those are not interactions that are predictable by just like reading the code, generally speaking.

So we know that people are going to ship bugs to Prod. So production monitoring is still important and we still invest in that.

But we know that there is a potential that we can catch some reasonable percentage of bugs before the code ever makes it to users.

Lauren Lecoffre:

So is that resolution metric something that you guys are tracking and like to make visible? Asking as somebody who's not seen your product.

Indragie Karunaratne:

Yeah, we, we are definitely tracking those internally. We're currently also talking about making some of those metrics available to customers. Right.

So some sort of statistical dashboard on like, you know, here are a bunch of things. Here's how many things we caught for you during code review, here's how many things we caught during production.

And so you ideally want to increase the ratio of bugs caught at review time versus bugs shipped to production. And as we improve the code review agent, we're hopefully catching more of those bugs in development.

But yeah, it's definitely something that we're tracking and we want to expose to users in some form.

Zac Darnell:

How much, kind of a general question, not just in the AI bucket but in all of it.

Because, you know, even Seer is probably part of the multi product strategy. How much has client feedback driven a lot of these decisions for you guys?

Sorry, it was just a thought that kind of came to mind, because you're taking a very interesting slice on some of this. Yeah, how much has that weighed in on kind of the roadmap and the decisions that you guys have made?

Indragie Karunaratne:

Yeah, so you're talking about, like, customer feedback driving some of this product direction.

Zac Darnell:

Yeah, I mean like we mentioned platforms like Mixpanel or Amplitude.

That's the other one I was trying to think of. Those kind of give you client and customer usability insights into how people are using your products, which can inform some of this. You can do interviews, you can look at the competitive landscape.

But I don't know, sometimes when I talk to other product leaders over the years, sometimes, like sometimes you just need to get on the phone with people and ask them some questions, you know, and it's, it's a little bit less formal.

So I don't know, I always wonder what that looks like for different companies, because I don't know that there's a wrong way other than not doing it, you know?

Indragie Karunaratne:

Yeah, I totally agree with that. I mean, we definitely listen to customer feedback.

I would say that we don't build stuff purely because a bunch of customers ask for it, because you could imagine a ton of customers would ask us for some analytics thing that we don't want to build because it's sort of not core to what we're trying to build. But we definitely do take those feedback signals into consideration. I'll give you a really interesting example.

So we built this product, we call it User Feedback, and it's one of the lesser known Sentry offerings.

So what user feedback does is it lets you embed this little feedback widget on your website or your mobile app, so customers can fill in their name and describe an issue that they ran into while using your product.

But the cool thing is that it's linked to all of the Sentry telemetry in that session. So you can actually view a session replay that shows the user navigating through your product.

It shows them hitting whatever problem they were telling you about, so you can see that live. And it's already connected to all the other telemetry. So it has your logs, it has your errors and everything.

So it actually is like a very comprehensive set of debugging signals combined with this qualitative customer feedback. And so what we did internally, so this is the product we offer to customers, we also integrated it throughout Sentry itself.

So if you go to Sentry and you press that give feedback button, that feedback is typically routed directly into a team channel in Slack, where the engineers and the product designers and the PMs who are working on that product are reading that feedback live. And you would think that this gets really noisy. And sometimes it does.

But the reason why we find this useful is because capturing some of that information in the very moment of frustration is some of the most valuable feedback we can get. And the downside to it is that you often get the most angry voices, because they're so frustrated that something didn't work.

But the great thing is that it gives you a very honest signal into what the real pain points of the product are.

Because if we go and schedule calls with a bunch of customers, you know, have a PM sitting on a call asking them questions about how they use the product, they'll say stuff, but is this actually important, or are you saying this because we asked you the question? It's really hard to discern sometimes.

Zac Darnell:

That's just what's top of mind for you as I'm asking you this.

Indragie Karunaratne:

Yes, totally. Yeah. And it's really hard to get a prioritization signal, because if you ask customers what they want, they'll give you a laundry list of stuff. But it's not actually targeting a real pain point unless you catch them in the moment where the pain is being inflicted. Right.

And so I think it's been a really useful thing for us.

Zac Darnell:

That's really interesting. It's like you're describing tacit knowledge versus, like, known knowledge. Right.

Tacit stuff you can't articulate until somebody pokes at it a little bit. Right. Or it's experienced.

Indragie Karunaratne:

Exactly.

Zac Darnell:

That's really smart. I don't know, Lauren.

As you put your product hat on, I would imagine, I don't know, maybe that's rubbing up against some of the practices that you hold near and dear to your heart. I don't know.

Lauren Lecoffre:

Yeah, it's similar to research questions. Right.

As a UX researcher, some of the ways we lead, or the situation and environment where we ask those questions, need to be strategic and organic and simulate the real life experience as much as possible. So I think that's a great approach.

I was laughing because it's one of my favorite gestures is when I, like, shake my phone and it's like, what's wrong? And I'm like, strangling it. Right. I wonder if you have that enabled for your laptops.

Indragie Karunaratne:

This is a great anecdote, because I'm very familiar with rage shake. I think Facebook actually invented it. We were very familiar with getting rage shake reports from users when I worked there.

But the other thing we did at Sentry is we actually built this thing called rage clicks and dead clicks. So if a user is on a page and they repeatedly click on an element that's not responding, that gets logged as a rage click.

So we know there's a frustration signal there.

And there's also Something called a dead click, which is similar, but it's like, oh, they kind of clicked once and the element didn't do anything or something. So those are all those, like, implicit signals where a user is not going in and, like, directly giving you feedback necessarily.

But just by their behavior, you can kind of deduce that there's something wrong with the product.
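
A simplified sketch of how heuristics like these can be implemented in the browser; the thresholds and the mutation-based "did the page react" check are illustrative, not Sentry's actual implementation.

```typescript
// Illustrative thresholds, not Sentry's real values.
const RAGE_CLICKS = 3; // this many clicks on one element...
const RAGE_WINDOW_MS = 1000; // ...within this window is a rage click
const DEAD_TIMEOUT_MS = 700; // no visible reaction by then is a dead click

const recentClicks = new Map<EventTarget, number[]>();

document.addEventListener("click", (event) => {
  const target = event.target;
  if (!target) return;

  // Rage click: repeated clicks on the same element in a short window.
  const now = Date.now();
  const times = (recentClicks.get(target) ?? []).filter(
    (t) => now - t < RAGE_WINDOW_MS,
  );
  times.push(now);
  recentClicks.set(target, times);
  if (times.length >= RAGE_CLICKS) {
    console.log("rage click detected on", target);
  }

  // Dead click: the DOM shows no reaction shortly after the click.
  let reacted = false;
  const observer = new MutationObserver(() => {
    reacted = true;
  });
  observer.observe(document.body, {
    childList: true,
    subtree: true,
    attributes: true,
  });
  setTimeout(() => {
    observer.disconnect();
    if (!reacted) console.log("dead click detected on", target);
  }, DEAD_TIMEOUT_MS);
});
```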

Zac Darnell:

I just learned a new thing, so thank you. I've never shaken my phone really hard, so if I shake it, will it, like, start yelling at me? Like, I don't. I don't know.

Apparently I've not gotten so mad at my phone that I've rage shaked it.

Lauren Lecoffre:

I'm just more emotional than Zach.

Zac Darnell:

Well, I was definitely rage clicking right before we were trying to hop on here to record this show on the other platform, Riverside. Why isn't this button letting me into the studio? Daggone it. Oh, man. All right, so before we wrap up here, there are two other things that were on my mind.

One path is more around, like, how you've navigated hyper growth, because I think you guys have had some pretty good growth from what I've seen over the last few years, especially as you, you know, have dived into multiple products. Right.

That's a lot of growth all at once. But also I would imagine, I could make the assumption, that because you guys are more of an SDK platform for engineers, you have kind of a developer first mindset. And so, like, there's some nuance-y things in there as well. I don't know. Which path do you feel like would be more interesting?

Indragie Karunaratne:

Yeah, I think we could talk about the developer first stuff, because I think supporting the 100 plus platforms that we support is kind of an interesting story. It's not that common that a company tries to prioritize making something work for that broad of an audience. So I think there's definitely some interesting stuff there.

Zac Darnell:

Well, yeah, tell us a little bit about that, because I would imagine that keeping consistency would be really hard and scale would also be very difficult in that broad sense.

Indragie Karunaratne:

Yeah. So on the consistency piece. Right.

So the typical approach that we take when we're trying to build SDK support, so, I mean, the process goes like this. So we have a product that we want to build, right. Whether it's distributed tracing or logs or something.

And so we start building the product, and then we realize that we pretty much always need some additional information that the SDKs need to collect. And so we do have a lot of SDKs that cover, like I said, a very broad range of platforms. But there is an order of importance there.

So Sentry's biggest audiences are typically full stack developers that are building apps in Python and JavaScript, with JavaScript meaning both browser and server side JavaScript. So typically, whenever we need to build SDK support for a new feature, those platforms get prioritized first.

And when the SDK's team goes and builds support for those first few platforms, they come up with a reference architecture, an API, and other implementation details that are fully documented and are made available as a spec to other implementers.

So when the other dozen SDK authors go and build their implementation of that, which looks slightly different in each language, they still stick to a set of common conventions around how users interface with that feature and how the implementation is designed. And one of the other interesting things about this is that a lot of the documentation about how Sentry itself is built is public.

So we're not talking about documentation for end users of Sentry, but we publicize what we call develop docs, which are basically our internal documentation for our own development teams that we publicize because Sentry is an open source product.

So you can actually go and read in the develop docs a lot of the SDK design philosophies, architecture decisions, the history of decisions that were made. So all of that is very well documented and that's how we maintain the consistent experience across all these platforms.
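
As a flavor of what such a spec pins down, here is a slimmed-down illustration, not the real develop docs: a common surface like this is what every language SDK then implements idiomatically.

```typescript
// Illustrative only: the kind of common surface an SDK spec pins down so
// that the Python, Go, and Swift SDKs all feel the same to use.
interface SentrySdkSpec {
  // Configure the client; every SDK accepts a DSN the same way.
  init(options: { dsn: string; tracesSampleRate?: number }): void;

  // Report a caught error with its stack trace attached.
  captureException(error: unknown): void;

  // Record a point-in-time message at a given severity.
  captureMessage(message: string, level?: "info" | "warning" | "error"): void;

  // Run a callback inside a span so the work joins the active trace.
  startSpan<T>(options: { name: string }, callback: () => T): T;
}
```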

Zac Darnell:

On being an open source platform: I feel like, again for simplicity's sake, folks fall into two camps. One is, open source is awesome and everything should be. And then some folks kind of believe, yeah, open source is nice, that's cute, but we're here to make money.

Obviously you guys have built a really cool company and a business from being an open source platform. How much has that influenced some of that developer first approach?

Indragie Karunaratne:

So Sentry is, you know, Sentry does not fit the definition of conventional open source. So we are not licensed under a conventional open source license like MIT or Apache. What we have is something called the Fair Source License.

And so basically what this means is that Sentry, the code is open source, right? The, you know, the back end, the SDKs, all components of the system are open source. People are free to take Sentry self host it.

We have a lot of large companies and small companies that don't pay for Sentry SaaS but want to sort of like self host it because of data residency requirements or security requirements or whatever reason they have. So you're free to sort of host it by yourself, use it at your company. You don't have to pay us anything.

But the thing that the Fair Source license is designed to prevent is, for example, some sort of other SaaS vendor of, you know, cloud software taking Sentry kind of wholesale, hosting it, and then charging people to use it.

So there's sort of like a, you know, a third party operating a clone of Sentry and making money off it while not contributing to the development of Sentry. And so the way that the Fair Source license works is that for a period of two years, those terms apply, those restrictions apply. Right.

So you cannot sell, you know, Sentry to your own customers. But after a two year period, that license turns into Apache 2 or MIT. So at that point it is fully open source. Right.

So you can do whatever you want, you can sell it.

But it is with that two year time delay which sort of has a nice forcing function, which means that we need to continue building and innovating and developing these new features and new capabilities that people want to pay for. But it also does give people a convenient means to sort of read the code, self host it and doesn't.

And it helps us sort of like, you know, protect our IP and not worry about competitors trying to sell the product without contributing to its development. So that's kind of the balance that the Fair Source license strikes.

Zac Darnell:

Okay, that's actually really helpful, because, you know, I've got a thimbleful of experience with MIT, or what's the other one? GPL, I think. General Public... I forget what the acronym stands for. And it's more about go forth and conquer.

So it's really interesting that you guys have found kind of a nuanced path to both protect yourselves as a business and continue to support that developer first community. I love that. That's really, really interesting.

Well, I know we're coming up on time here. I'd love to go a little bit deeper, but this has been really fun. I've learned a lot. Love the stories.

I appreciate you kind of letting us behind the scenes, not to use our own name in the show here, but this is kind of the point of the show was to be able to go a little bit deeper on what has happened to help you guys get to this point. And so this is really, really fun. And I learned a couple of new terms.

I'm going to go shake my phone a little bit and see if I can get the dialog to come up.

Indragie Karunaratne:

Try the rage shake. Yeah.

Zac Darnell:

Yeah, just a little bit. So thank you so much for spending some time with us.

Indragie Karunaratne:

Thanks, Zach. Thanks, Lauren. It was great meeting you.

Lauren Lecoffre:

Yeah, great meeting you too.

Zac Darnell:

Yeah. Thanks, Lauren, for being my co host. Appreciate it.

Lauren Lecoffre:

Hey, happy to be here.

Zac Darnell:

Sam.
