Product Management and Tech Ethics with Alyssa Simpson Rochwerger
Episode 8 • 10th March 2022 • The Business Integrity School • University of Arkansas: Sam M. Walton College of Business
Duration: 00:37:03


Shownotes

Alyssa Simpson Rochwerger, co-author of Real World AI and Director of Product Management at Blue Shield of California, sits down with Cindy Moehring to explain the difficulties and benefits of managing engineers as someone without an extensive background in technology.

Rochwerger and Moehring also discuss the harms of unbalanced data sets in creating AI, the art of raising concerns before product launch, and the importance of macro and micro thinking.

Learn more about the Business Integrity Leadership Initiative by visiting our website at https://walton.uark.edu/business-integrity/

Transcripts

Cindy Moehring:

Hi, everyone. I'm Cindy Moehring, the founder and Executive Chair of the Business Integrity Leadership Initiative at the Sam M. Walton College of Business, and this is the BIS, the Business Integrity School podcast. Here we talk about applying ethics, integrity, and courageous leadership in business, education, and most importantly, your life today. I've had nearly 30 years of real-world experience as a senior executive. So if you're looking for practical tips from a business pro who's been there, then this is the podcast for you. Welcome. Let's get started.

Cindy Moehring:

Hi, everybody, and welcome back to another episode of the BIS, the Business Integrity School. And today we have with us another very special guest, Alyssa Rochwerger. Hi, Alyssa, how are you?

Alyssa Rochwerger:

Thanks for having me, Cindy.

Cindy Moehring:

You're welcome. We are really lucky to be able to spend some time with Alyssa today and to talk about her role as a product manager responsible for machine learning and AI, and kind of the lessons that she's learned. But let me first tell you just a little bit about Alyssa, and then we will dive right into the conversation. Alyssa is a customer-driven product leader who's dedicated to building products that solve hard problems for real people. And she delights in bringing products to market that make a positive impact for her customers. She has a lot of experience in scaling products from concept to large-scale ROI, and she has proven her mettle at both startups and large enterprises. She's worked in numerous product leadership roles in machine learning at organizations such as Figure Eight, where she was VP of Product; after Figure Eight was acquired by Appen, she was VP of AI and Data there. Earlier in her career she was also a Director of Product at IBM Watson. Recently, she left that space to pursue her passion for healthcare, which is incredibly important right now with everything going on with COVID, and she serves as a Director of Product at Blue Shield of California. So Alyssa, thank you so much for joining us today. I love the book that you wrote with your co-author Wilson Pang, who I know is still at Appen. Fabulous book, very practical; I think in its application it provides a lot of guidance for people in business generally who are trying to figure out how to implement AI and machine learning. But I gotta back up and just ask if you can share with our audience how you got into this space, because I don't think you're an engineer or a data scientist by profession. So tell us about yourself.

Alyssa Rochwerger:

Well, thank you so much. Um, you know, the goal of the book is really to be pretty broad, and to appeal to the folks who are actually trying to use this stuff in real life.

Cindy Moehring:

Right, right.

Alyssa Rochwerger:

To back up a little bit, I sort of wrote the book for myself a little bit. You know, dating back, I have a liberal arts degree, I'm dyslexic. I'm married to a computer science engineer, but I'm certainly not one myself. You know, I pursued an American Studies degree and photography. Tech was sort of on my radar, but not something that I was particularly drawn to. And I ended up with a job at a tech company, in the customer service department, actually. And so I was on the phone talking to people wanting to cancel and frustrated with the product. And so I sort of quickly figured out that product management was a thing. And I learned about, you know, what it was to be able to sort of concept and build and ship things into a production environment. And so that's sort of what led me to tech and what led me into product management, and later on machine learning and AI as a really powerful tool to enable really big change at scale.

Cindy Moehring:

Yeah, you know, I love your story. And I love that we're going to be able to talk today about what this looks like from a product management perspective, because that is a role in today's world, and it's a really, really important role in terms of making sure that it works right in an organization. But I gotta tell you, I don't think a lot of people still understand it. And I think it is something that students in particular need to get more familiar with. And I think there's an idea out there in a lot of people's minds that implementing AI in an organization is really just, that's what those IT people do. That's what folks like your husband do, you know, computer...

Alyssa Rochwerger:

Fancy engineers, yes.

Cindy Moehring:

That's what they're there to do. And, you know, "I don't understand what they're saying, I don't know what it means, just let them deal with it." And so they don't really think that it applies to them, right? In their role as just a business leader. So, you don't have a technical background, necessarily?

Alyssa Rochwerger:

No.

Cindy Moehring:

Or, you didn't start out with technical job responsibilities. So, help us understand: is it really just an IT problem or project, from your perspective, or what role do others like yourself have to play?

Alyssa Rochwerger:

Yeah, so I think product management is often a technical role, or should be; you should become technical enough to kind of be dangerous. It does not mean that you need to enroll yourself in computer science 101. I have taken some of those Coursera classes myself to, you know, help me understand the lingo and understand kind of some frameworks. And I certainly don't code, or, you know, you certainly wouldn't want me to. I think what I would say is that often business people are pretty intimidated by that side of the business, or think that it is insurmountable to start to understand. And I can tell you, I have a photography degree in liberal arts; it's not insurmountable. It's really just like anything else: it takes some dedicated time to understand, you know, what the words mean, and understand some structures, and dig in. And often, at least in my case, I had some really wonderful colleagues who, you know, took me aside and would whiteboard things or spend half an hour with me before or after a meeting,

Cindy Moehring:

Right.

Alyssa Rochwerger:

Breaking something down. But it's absolutely critical that business leaders of all kinds, right, whether you sit in legal or privacy or, you know, in a line of business ownership, understand enough of how the technology works in order to understand how it can unlock opportunities, or mitigate risks, or introduce risks if not managed properly into what you do manage directly, right, and what you do control.

Cindy Moehring:

Yeah.

Alyssa Rochwerger:

You need to be asking questions around: 'Where's the data from?', 'How is this going to be applied?', 'What is the risk?', 'What happens if it doesn't work?', 'What happens if we misclassify something?' And those types of business questions are so critical. And maybe to kind of make it relatable for a second: you know, most people interact with machine learning every day, right? Whether you do a Google search, or you have Alexa or Siri in your house. I find myself arguing with Alexa. Great service, I love that my engineer of a husband has hooked her up to turn on lights, and, you know, we have a very connected home. But when I'm asking her to play a song for my toddler, you know, Baby Shark, she often doesn't understand what I've asked her. And that is an outcome of a machine learning product, right, that's applied to me as a, you know, as a regular consumer. That is around accuracy of speech recognition, right, as applied to women of my age in California, where I live. And so, whether or not the machine learning sort of system underneath took me and my situation into consideration, of me wanting to play Baby Shark for my kid, right? Or was the training data something else? Right? Was it optimized for people wanting to play rap songs? Or was it optimized for people wanting to, you know, check the weather, because that's what a different cohort would do. So just to circle back a little bit, the business lens on it can really help shape what are the use cases and what are the applications. And that's so critical for getting the underlying systems working.
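One concrete way to surface that kind of cohort mismatch is to measure recognition accuracy per user group. Here is a minimal, hypothetical sketch in Python; the cohorts, transcripts, and word-error-rate harness are illustrative stand-ins, not any vendor's actual evaluation:

```python
# Minimal sketch: speech-recognition accuracy broken out by user cohort.
# All cohort labels and transcripts below are hypothetical placeholders.
from collections import defaultdict

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance (Levenshtein) divided by reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Dynamic-programming table for edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Each record: (cohort, human reference transcript, model output).
results = [
    ("adult_male",   "play the news",   "play the news"),
    ("adult_female", "play baby shark", "play maybe sharp"),
    ("child",        "play baby shark", "hey baby shark"),
]

by_cohort = defaultdict(list)
for cohort, ref, hyp in results:
    by_cohort[cohort].append(word_error_rate(ref, hyp))

# A large gap between cohorts is the signal that training data
# did not reflect the people actually using the product.
for cohort, rates in sorted(by_cohort.items()):
    print(f"{cohort:12s}  mean WER = {sum(rates) / len(rates):.2f}")
```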

Cindy Moehring:

Kind of helps get it right on the front end, instead of, you know, trying to fix it on the back end. But there's always that need for speed, particularly with tech, and I think that makes that a challenge, and makes roles like the one you play all the more important. So what do you think are some of the most important aspects of being a successful product manager? I can imagine there are people who are going to listen to this episode and wonder, "Wow, how do I do that successfully? That sounds really cool."

Alyssa Rochwerger:

Yeah.

Cindy Moehring:

So, what are those?

Alyssa Rochwerger:

You know, what I'll say is, when I'm hiring, and I'm hiring for my team right now, you know, I'm always looking for people who can do forest-and-tree thinking. So do you understand the really big picture of what the business outcomes are, or the market ecosystem that we're playing in, or the strategy that we're trying to achieve? And can you zoom into the particular tree, or very small detail, that we're talking about, that is going to help kind of unlock and connect dots for us to be able to see the forest? So I'm always looking for people who can zoom out and zoom in and pivot between those worlds easily. It's hard to do. And it's a skill that takes practice and needs to be developed over time. And it also takes experience in a particular space, often, to know what all the dots are before us, right, and to be able to kind of connect dots. I also look for people who are adaptable and can communicate really successfully between different types of humans. In a product management role, you often don't manage anyone; you're often sort of managing stakeholders that matrix-support you. And you need to be able to convince people to do stuff for you, and communicate the "why" really effectively. And talking to an engineer is probably a different type of communication than talking to a designer, or talking to a partner, or talking, you know, to executive management. So you need people who are good at pivoting and reading rooms and reading sort of emotional cues.

Cindy Moehring:

And you know what, that's another learned skill. It's not easy to be able to communicate to several different stakeholder groups; it's like Mars talking to Venus sometimes when you're talking about, you know, HR versus the IT folks or the computer engineers. So having somebody who sits at the juncture and is able to speak to both in a way they understand is a really special skill. And something I really want to put a fine point on, that you said, is this ability to zoom out and see the forest but also the tree. You know, I think that's different today than it used to be when people started out in business. It used to be, you know, you start with your little project, and you do your little thing, and it's down here, and you don't worry so much about all the rest of it; that's, you know, managed by somebody else. And I think in this dispersed kind of world that we live in today, in this tech world, with things moving so quickly and disruptive technologies, it's different. You do need people who can understand, up here, which they may have thought they weren't going to need to do in their careers for another 10-15 years, right? That was going to be somebody else's problem, not theirs, when they got out. But when they start, actually, they do have to be able to understand that and then apply it to the small, you know, issue down here. That's different, I think. So I'm glad you kind of brought that up.

Alyssa Rochwerger:

I can give a, maybe, a concrete example of that in the machine learning space. And I talk about this story in my book. It was the first machine learning product I was launching. It was visual recognition, at IBM, and it was a simple API where, you know, you gave it a picture, and it came back with a label. And the team had a sort of beta product when I showed up. And I didn't understand what I was doing at all, and I was trying to learn: how does this work?

Cindy Moehring:

Right, right!

Alyssa Rochwerger:

Well, how do you get these labels? How do you know the labels are correct or good or accurate? And I kind of got a runaround of different sort of technical people giving me different answers that, frankly, I didn't really understand. And I was kind of trying to, like, what do you mean, it's accurate? Like, how do you know? Like, why are you calling this picture a dog versus a puppy? Right? Like, which one? How do you know? And anyway, we eventually sort of settled on an answer that I could sort of barely understand. But these were really smart people who, you know, knew a lot about this. I was like, okay, sure. And we were about to be making a bunch of investments in accuracy. And we had developed a system that we thought was significantly more accurate, by the metrics that we were running, than the previous one. A few days before launching, someone came to me and said, we can't launch this. And I was like, what are you talking about? Like, we've invested months of work, we all agreed it's better. And he had put a picture into the system and gotten back the label of "Loser." And the image that he had put in was actually an image of someone in a wheelchair, and it tagged it with "Loser." And so, you know, I was aghast. We were all like, "Oh, my God, this is objectively horrible, right? Stop the presses." And the dot that we were connecting this to was, like, IBM's massive AI strategy could end up with a terrible New York Times or Wall Street Journal article about how, you know, biased and irresponsible we are, for building a system that, you know, tagged people with disabilities as "Loser," right? Like, that does not align with the big-picture values, right? It was this tiny little one label, on one non-money-making, you know, sort of beta product, that could have really derailed a major strategy, right, for the company. We caught it. We didn't launch it. You know, we made a bunch of fixes. But that's the type of forest-and-tree thinking.
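A catch like that can also be systematized as a pre-launch regression test. Below is a minimal sketch, assuming a hypothetical classify() hook into the model under test and an illustrative blocklist; it is not IBM's actual process:

```python
# Minimal sketch: scan model predictions over a curated sensitive test set
# and fail the release if any harmful label appears. Blocklist and
# classify() are hypothetical stand-ins.
HARMFUL_LABELS = {"loser", "criminal", "failure"}  # illustrative only

def classify(image_path: str) -> list[str]:
    """Stand-in for the real model under test; returns predicted labels."""
    raise NotImplementedError("wire this to the model under test")

def audit(image_paths: list[str]) -> list[tuple[str, set[str]]]:
    """Return every image whose predicted labels hit the blocklist."""
    failures = []
    for path in image_paths:
        hits = {label.lower() for label in classify(path)} & HARMFUL_LABELS
        if hits:
            failures.append((path, hits))
    return failures

# Usage idea: run against a curated set of sensitive images (people with
# disabilities, medical settings, etc.) and block the launch on any hit.
# failures = audit(sensitive_test_set)
# assert not failures, f"harmful labels predicted: {failures}"
```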

Cindy Moehring:

Yeah, that's a great example. And it really is because of the emergence of tech; I don't think there were nearly as many issues in the past that could have affected the company at that level when you were working on one little project down here. But when you're dealing in the tech world, and everything is so transparent, right, and dispersed, it can really have an outsized effect that we didn't used to see. So you just hit on something else that I think is super important when you're talking about responsible AI, responsible machine learning, and the skills that you need to make sure that that happens. And as you told that story, you hit on the point of speaking up, right? And that comes up in a number of different contexts, you know, people seeing somebody doing something that they think is unethical, or, you know, in this case, somebody came to you and had the courage to say, "We can't do that, because, you know, the tag came back and said 'Loser.'" But speaking up is really hard, particularly for some people, if you're not in a supportive environment that, you know, kind of appreciates that. And even if you're not, it's your responsibility to raise it; you have to figure out how to do it. So how have you mastered that skill? And what tips do you have for others to be able to use it? Because it really can stop a lot of trains from derailing.

Alyssa Rochwerger:

You know, a boss of mine told me a couple of weeks ago, he was like, "Oh, so you're fearless." And I was like, I don't think I'm fearless. But I do think that's a skill I learned over time, which is, when you're speaking up, particularly about something that's controversial, or, you know, might be a little sensitive or not well received, I always like to start with making sure I really understand the problem, and I am sure of my facts, a little bit, so I'm on kind of sure footing when I'm bringing something to someone else's attention. Maybe I'll ask a peer about it or, you know, run it by some people who are like-minded, and get a couple of data points that confirm, like, oh, I'm on the right path here. And so I think being sure of yourself, or being confident in your understanding of the problem or your convictions, is always good prep if you're nervous to bring something up. And it's sort of like, no, I do understand this, and I understand it well enough, and I'm pretty sure about it, so that when you're speaking up, you are crisper in what you're communicating, and you've had a little bit of practice with audiences that are not scary, right? And you can even practice, you know, with your loved ones, with your husband, people who are closer to you, who can poke holes or ask questions that you might not have thought of.

Cindy Moehring:

That's right.

Alyssa Rochwerger:

So I always start with that before I'm going to escalate something. But also, bring something up in the context of why it matters, right? So, I didn't escalate, hey, this tag came back for this image as "Loser." I brought up, hey, this matters because I'm concerned about this bigger consequence or issue. How are we going to mitigate it? I'm worried we haven't thought this through. Have we? And you might not have enough context, right? It might have been, oh, actually, there was a whole mitigation strategy I wasn't aware of, something already in place that someone else had figured out. And so it's being open to things that you might not be aware of or prepped for. And doing it in a way that's just unemotional, right, or simple, right? And just like, here's the data I have, here's the conclusion that it leads me to, and, you know, what do you think? And not being afraid of, what's the worst that can happen? They say it's a great point, or you're wrong, or, you know, and then you can have that conversation.

Cindy Moehring:

Those are all incredibly good points. And it takes time to learn that skill you very eloquently described; there are a lot of lessons and nuggets in what you just said. But I think a lot of people's fear is, "Oh my god, people are gonna think that's a dumb question. I can't ask that in front of others." Have you ever felt that way? Or have others said that to you? What happens when you raise a question then? What's the sidebar conversation after the meeting?

Alyssa Rochwerger:

Yeah, I know that's a fear a lot of people have. And I think, you know, it's super valid. It comes from perhaps being outside of your comfort zone, a little bit, in a conversation. The former CEO of IBM, Ginni Rometty, used to say over and over again, "Growth and comfort never coexist." And so to be outside your comfort zone is actually a positive thing, a little bit; it means that you're growing. And so I try to reframe those situations not as you're asking a dumb question, but as you're growing a little bit; you're outside your comfort zone and you need to learn. And so, you know, sometimes taking notes for yourself in a meeting and, you know, pinging someone in the background, or going to your boss afterwards, or keeping a little list of things you didn't understand, if you don't want to say it in a large meeting, I think that's okay. Right? I also think you can disarm things a little bit and say, "Oh, maybe this is a dumb question," and that kind of provides the space a little bit for you to be in a learn mode. And, yeah, nine times out of 10, other people in the meeting have the same question.

Cindy Moehring:

I know!

Alyssa Rochwerger:

Right?

Cindy Moehring:

That's what I've found!

Alyssa Rochwerger:

Those little ways of catching yourself can put you at ease about the fear that people are gonna say, "Oh, that's a stupid question." Because most of the time, someone else has the same question too, because it wasn't well explained.

Cindy Moehring:

Right, right, right, right. And you walk out of the meeting, and more times than not, somebody leans over and says to you, or catches your arm and says, you know, or pings you online afterwards, "I'm so glad you asked that. I had the same question. I didn't understand that either. I'm so glad..."

Alyssa Rochwerger:

Mostly, that's what happens.

Cindy Moehring:

I know! Mostly that's what happens. So let's talk about some more specific examples of how ethics can come into play when an organization is trying to roll out AI and machine learning and trying to do that in a responsible way. It seems like there are just tons of examples, and some, I think, are riskier than others, and I want to get your opinion on that in terms of kind of the spaces they play in. But can you share with us your take on some of the recent examples that came to light, like the Apple credit card? You know, that's one that I think a lot of people have heard about; it was certainly on my mind listening to you talk about, you know, bias with your example earlier at Watson. Bias was essentially the issue that came out in this one too. But what do you think really happened there? And, you know, is ethics different from legal when you think about it and look at it that way? And right and wrong, how do you really figure that out?

Alyssa Rochwerger:

Yeah, sure. So for people who aren't super familiar with the Apple credit card situation, I'll provide a two-second overview of it. Several years ago Apple and Goldman Sachs got together saying, "Hey, we're gonna, you know, release this new credit card." And very quickly after launch, some sort of famous folks started tweeting, "Hey, you know, why is my wife getting approved for far less than I'm getting approved for, when we share finances?" And that kind of created a bit of a storm, and actually an investigation into it from a sort of regulatory perspective: were they administering credit unfairly? Were they allowing certain people, you know, women or men, to be unfairly advantaged or disadvantaged? And what happened there, right, was that the model was created behind the scenes with, I don't know exactly how many years of data, but, you know, credit history. And in that case, actually, they removed gender from what was being looked at. So they had a lot of different other attributes that they looked at in order to say, hey, here's the creditworthiness that we think, you know, we're gonna award you, that you can handle. And they removed gender, but if you look back at credit card history in the United States, it's biased. Men have had far greater ability to spend on credit than women, and there's only a much more recent history of women having significant spending power. And so, not surprisingly, there was bias in the data even if you removed gender as a field itself, right, through the jobs that people had, or titles that they had, or spending histories, or lots of other sort of resulting attributes that were different between men and women, if you just look at history. And the team, actually, in that case, when they launched, they hadn't controlled for it, and so they actually hadn't tested for it either. So they didn't know off the bat whether men or women were being unfairly advantaged or not. And when they looked into it, the long story that just came out more recently was that there was no actual violation of fair credit laws or standards. But that doesn't mean that it was ethical or good business, right? It was a total negative, like a black stain for Apple and Goldman Sachs for a while, when it could have been a really positive launch, like a new financial, you know, card and opportunity for the companies. And meanwhile, there was all this negative press and, like, regulatory investigation, which is not what they needed. And they couldn't explain, easily, the algorithm,

Cindy Moehring:

Right, and that was their issue.

Alyssa Rochwerger:

There wasn't transparency into it. That actually was sort of the missed business opportunity: as a business person, you can say, hey, how does this model work? Can we explain it? Right, that's a critical piece of going to market. And I think that is one of those ways that business folks can easily add value, by really poking and saying, okay, but, you know, explain this to me. And don't explain it to me using math; explain it to me in human terms that I can understand. And are there charts or graphs or ways that we can actively look at and measure the things I care about, right? Like gender balance, or racial balance, or, you know, other things that are probably interesting to you. It doesn't have to be regulated in order for that to be good practice.
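As an illustration of the kind of chart or measure a business person might ask for, here is a minimal sketch that breaks credit decisions out by a protected attribute retained for evaluation only, even though the model itself never sees it; all records are fabricated:

```python
# Minimal sketch: approval rates and average limits by gender, where gender
# is kept purely for evaluation, not as a model feature. Fabricated data.
from collections import defaultdict

# Each record: (gender_for_eval_only, model_approved, credit_limit)
decisions = [
    ("f", True, 5_000), ("f", False, 0), ("f", True, 4_000),
    ("m", True, 15_000), ("m", True, 12_000), ("m", False, 0),
]

stats = defaultdict(lambda: {"n": 0, "approved": 0, "limit_total": 0})
for gender, approved, limit in decisions:
    s = stats[gender]
    s["n"] += 1
    s["approved"] += approved
    s["limit_total"] += limit

# Large gaps here are the early-warning signal the Apple card team
# reportedly lacked: proxy features can reintroduce a removed attribute.
for gender, s in sorted(stats.items()):
    rate = s["approved"] / s["n"]
    avg_limit = s["limit_total"] / max(s["approved"], 1)
    print(f"{gender}: approval rate {rate:.0%}, avg approved limit ${avg_limit:,.0f}")
```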

Cindy Moehring:

Yeah, so that might be a space where you know, folks from HR or somebody from, you know, legal or somebody from, you know, a compliance organization may raise those questions, you know, back to the product developers and help them actually think about it and test it and try things out on the front end, as opposed to on the back end. Because, you're right, at the end of the day, the company's lost trust. And that's what it's all about, once you lose trust, it's really hard, sometimes impossible to ever get it back.

Alyssa Rochwerger:

Right. Like, my little thing, you know, going back to the story about the "Loser" tag: was that going to create a legal issue for IBM? Maybe, you know, probably not, right? But it was definitely going to create a press problem, it was definitely going to be a trust problem, and it was definitely going to kind of create sort of an ethical issue. And just because it's legal doesn't mean it's, you know, good, or okay, or good business.

Cindy Moehring:

Yeah, yeah. That puts a fine point on it and draws a distinction between kind of legal and ethics and using things responsibly. But let me ask you this, Alyssa: do you think that all applications of machine learning or artificial intelligence are equal on kind of the risk scale? I mean, what comes to my mind is, you know, things like targeted advertising. Like, I live in the middle of the US, in the mid-South, and so I'm probably not going to need a parka like somebody in Alaska would need. Assuming I allow myself to be targeted for advertising, if I say yes, does that create some kind of risk in your mind? Or how should somebody really think about that?

Alyssa Rochwerger:

Not at all; there's a huge spectrum of risk, right? And so, you know, advertising is fairly low on the risk spectrum. There's definitely risk there, it's not zero, but it's fairly low. Things like healthcare applications are really high risk, and so are things like legal applications, military applications, financial applications. You know, I'm in the healthcare space, and there was a study that came out recently, last year, on applied AI for healthcare; Harvard did a sort of business case about this. They wrote a paper, and, you know, basically what they found is that protocols that are currently in place for scoring people for cardiovascular risk are based 80% on data from folks who are Caucasian. And so it doesn't actually apply as successfully to people who are not white. And so there's sort of active bias going on in the healthcare space right now, when you're scoring people for cardiovascular risk, because the data that's being used to score is from white people, right, and doesn't apply equally to Asian people, or African American people, or Latinos. And so that's the type of application that, to me, is much more damaging and much riskier, right, because literally someone is, you know, at risk of a heart attack, right? You're supporting them and administering medical care based on biased data that's going to perform well for white people but not Black people.
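A first-pass check for that kind of skew is simply comparing the training set's demographic mix against the population the model will score. In this minimal sketch, the 80% figure echoes the study mentioned above; every other number is made up for illustration:

```python
# Minimal sketch: training-data demographic mix vs. the population served.
# The 80% Caucasian share echoes the cited study; all other shares are
# illustrative placeholders, not real statistics.
training_share = {"white": 0.80, "black": 0.08, "asian": 0.06, "latino": 0.06}
population_share = {"white": 0.60, "black": 0.13, "asian": 0.06, "latino": 0.19}

print(f"{'group':8s} {'train':>7s} {'serve':>7s} {'ratio':>7s}")
for group in population_share:
    train = training_share.get(group, 0.0)
    serve = population_share[group]
    # Ratio < 1 means the group is under-represented in training data
    # relative to the population the model will actually score.
    print(f"{group:8s} {train:7.0%} {serve:7.0%} {train / serve:7.2f}")
```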

Cindy Moehring:

Yeah. And I would imagine that could bleed over into the insurance space, too, because models are generated for, you know, how much to charge somebody for insurance based on what they think their risks are or not. And it could just get bad.

Alyssa Rochwerger:

Yeah, luckily the Affordable Care Act makes it illegal to do that anymore; you can't price insurance based on pre-existing conditions. But there are real challenges in the healthcare space around administering that. There are also real challenges in, like, hiring. Amazon got into some hot water several years ago: they were scoring resumes, and they sort of realized that men were being much more favorably passed through and scored than women were, as a simple example. There are also issues with this being applied in the legal space. There was a ProPublica exposé on a recidivism algorithm called COMPAS. So it's scoring people on how likely they are to reoffend, and the report showed that it was likely to score someone who was Black as having a much higher rate of reoffending than someone who was white with sort of equal data and background around what their history was. Applications like that, I think, are pretty damaging and very unethical. And the legal sort of framework and regulatory system hasn't caught up yet to the sophistication of this technology. So we're way out ahead of the legal framework here from a technology perspective.

Cindy Moehring:

So some of the lessons learned, I would put into that bucket: well, know the risk, kind of, area that you're dealing in. Is it high risk or is it low risk? And kind of keep that in mind. Make sure you have disparate people at the table so that they have differing opinions. But then, you know, also, you've got to really understand not just the area that you're dealing in, but what is going to be the specific application for it. Kind of slow down, think about it, do it on the front end. What else? What are some of the other lessons?

Alyssa Rochwerger:

Yeah, all of that. And, you know, the book really goes through it; it's meant to kind of be an answer to that question, right, which is: how do I do this successfully? How do I do this well? And we sort of break it down in different chapters. Team is absolutely one. We spend a lot of time in the book talking about the importance of data and the provenance of the data, and understanding: where is the training data that is building this model coming from? And does it reflect the use case that you're applying it to? Right? Take a voice recognition system like Alexa. A lot of the data that was used to train modern speech recognition systems was actually taken from government data from the 1980s, of male newscasters reading the news aloud. That was sort of the early development of speech recognition technology. That has nothing to do with me in my kitchen asking her to play Baby Shark. I'm using different words, the acoustic environment is super different, right? That's a very different set of data, and the training data doesn't reflect that. And so that's an example of the disconnect that the business folks can really bring to the table: hey, here's how we're going to apply this in the real world. Here are the applications, here are the stakeholders, here's who's gonna consume it. Does the data match that? If you just do that, and just match the application and the data, you can get really far in terms of being a balanced and fair application.
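One crude but useful way to quantify that disconnect is to ask how much of the live traffic the training corpus has even seen, for example an out-of-vocabulary rate. A minimal sketch with tiny stand-in corpora:

```python
# Minimal sketch: what fraction of words in live queries never appeared in
# the training corpus? The corpora below are tiny stand-ins for real data.
training_corpus = [
    "the senate passed the bill today",
    "markets closed higher this evening",
]
production_queries = [
    "alexa play baby shark",
    "alexa turn on the kitchen lights",
]

train_vocab = {w for line in training_corpus for w in line.split()}
query_words = [w for q in production_queries for w in q.split()]

# A high OOV rate is a red flag that the training data (newscasters)
# does not match the deployment context (kitchen voice commands).
oov = [w for w in query_words if w not in train_vocab]
print(f"out-of-vocabulary rate: {len(oov) / len(query_words):.0%}")
print("unseen words:", sorted(set(oov)))
```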

Cindy Moehring:

Yeah, that's another really important point about the role that product managers play in making sure that it gets looked at that way. But let me ask you one more question on that, from a product manager perspective. Once your product is successfully launched, is your job done there, in your mind, as a product manager?

Alyssa Rochwerger:

No. The pesky thing about product management is, actually, you know, that's the hard part: you actually have to manage the product after launch and understand how it's going, right? Do people like your product? Do they want to share it with someone else? Do they want to buy more of it? You know, there are a lot of different kinds of products, and the goals of any product are sort of different, but launching with an ability to measure how it's going is so key. YouTube actually does this really well. A narrow example is YouTube's process for how they take down content that is unsavory, right? So how do they remove content that is inappropriate, from either a pornographic perspective or, you know, a violence perspective, or whatever? And they actually publish it. It's a transparency report; you can go online and Google 'YouTube Transparency Report.' And they reflect back: here is what we take down, here's why we take it down, and here's who flagged it to take down. And the vast majority is actually flagged by their internal algorithms. But there's another portion that's flagged by users saying, ooh, like, this content is bad, right? And then there's a very small portion that's flagged by regulatory or government folks. And they reflect it back, and they ask all the time for feedback around how it's going, right? And so, over the last 10-15 years, they have built up a very sophisticated way of doing that, and they are constantly updating their model, because what is violent today is not violent tomorrow, right? The content shifts. And that's the thing that's different about machine learning than anything else: the content itself, it's alive, it moves. You know, what was true today is not true tomorrow. Simple language changes and evolves. And as a product manager, you have to have a framework for taking advantage of that and for adapting to the way that culture and language shift.
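At its core, that style of transparency report is an aggregation of takedown events by flag source. A minimal sketch with fabricated events:

```python
# Minimal sketch: count content removals by who flagged them, in the style
# of YouTube's transparency report. All events below are fabricated.
from collections import Counter

removal_events = [
    {"video_id": "a1", "flagged_by": "automated"},
    {"video_id": "b2", "flagged_by": "automated"},
    {"video_id": "c3", "flagged_by": "user"},
    {"video_id": "d4", "flagged_by": "automated"},
    {"video_id": "e5", "flagged_by": "government"},
]

counts = Counter(e["flagged_by"] for e in removal_events)
total = sum(counts.values())
# Publishing this breakdown over time is one concrete post-launch
# measurement framework a product manager can own.
for source, n in counts.most_common():
    print(f"{source:10s} {n:4d}  ({n / total:.0%})")
```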

Cindy Moehring:

So now the lesson is: it's a lifecycle. It isn't getting a product to launch and then you move on to launch the next product. It's a whole lifecycle.

Alyssa Rochwerger:

Yeah. Yeah.

Cindy Moehring:

Interesting. Alyssa, this has been a great conversation. Thank you so much for sharing your time with us, and your thoughts about product management in this ever-evolving space and how to implement it responsibly. I always like to leave the audience with one last nugget, though: where do you go for additional information on this topic that you would recommend to the audience? Is there a good podcast series, maybe, or another good book that you've read, or anything like that, that you could recommend?

Alyssa Rochwerger:

Yeah, absolutely. I'll recommend two things. On the product management side of it, if you're looking to do product management, I always recommend Marty Cagan's book "Inspired: How to Create Products Customers Love." I think it's an awesome entry point into product management and how to get good at it. On the machine learning bias side, there's a recent movie that came out on Netflix called Coded Bias; highly recommend it. It's a really approachable way to understand the impact of bias.

Cindy Moehring:

Yeah, there is. I watched that one recently, and it's fascinating knowing the story behind the story, and how the whole bias issue kind of came to light, really, through a student at MIT's experience on a project. So, great recommendation. That was good. Thanks, Alyssa. This has been a wonderful conversation. I appreciate you spending some time with us today.

Alyssa Rochwerger:

Thanks for having me.

Cindy Moehring:

All right. Bye-Bye.
