So your name’s been mistreated by autocorrect. What harm does that cause? And what would it take to fix it?
In this episode, Northeastern University law professor Rashmi Dyal-Chand discusses her research into autocorrect's bias and shares her blueprint for change: from what consumers can do to where the law might need to step in.
Plus: journalist Dhruti Shah on her viral 2018 BBC article that first brought the issue to light.
This is Part 3 of 'What's in a Name,' our mini-series about autocorrect and inclusive technology.
--
New to the series? Start with Part 1 and Part 2
Enjoying the show? Leave a rating to help others discover it, or share your autocorrect story at madeforuspod@gmail.com
--
About Rashmi Dyal-Chand
Rashmi Dyal-Chand is a law professor at Northeastern University. Her research and teaching focus on property law, poverty, economic development and consumer law. She is the author of the article, “Autocorrecting for Whiteness”, published in the Boston University Law Review in 2021.
Learn more about Rashmi Dyal-Chand: https://law.northeastern.edu/faculty/dyal-chand/
Read the “Autocorrecting for Whiteness” article: https://www.bu.edu/bulawreview/files/2021/03/DYAL-CHAND.pdf
About Dhruti Shah
Dhruti Shah is a creative practitioner, storyteller and journalist who focuses extensively on belonging. She is a collaborator with I Am Not A Typo.
Read Dhruti’s article: https://www.bbc.co.uk/news/business-46362259
Follow Dhruti on LinkedIn: https://www.linkedin.com/in/dhrutishahstoryteller/
Follow Dhruti on Instagram: https://www.instagram.com/dhruti_journo/
--
Connect with Made for Us
Show notes and transcripts: https://made-for-us.captivate.fm/
Newsletter: https://madeforuspodcast.beehiiv.com/
Perhaps those who have developed autocorrect technology might look at those harms and essentially do a cost-benefit analysis and say, well, yes, it's annoying to some people and that is a harm, but does that harm outweigh the convenience?
TS:You're listening to Made For Us. I'm Tosin Sulaiman. This is part three of What's In A Name. We've heard from people whose names are treated as typos, and we followed the campaign pushing tech companies to expand their definition of what's correct. In this episode, we step back and ask the bigger questions. Why does autocorrect treat some names as errors? What harm does that actually cause? And what can be done to make the technology more inclusive?
Later, we'll also hear from journalist Dhruti Shah, who wrote the stories exposing the issue in 2018. First, here's my conversation with Professor Dyal-Chand.
RDC:My name is Rashmi Dyal-Chand and I am a professor of law and also an affiliate professor of public policy and urban affairs at Northeastern University in Boston, Massachusetts.
TS:I'd love to start with your name. What's the story behind it?
RDC:My first name is Rashmi. It means ray of sun. I'm South Asian, I'm Indian by ethnicity and by heritage, so the name is one that was given to me by my parents to signify that particular meaning. And I will say also that the meaning of the name is important to both me and my family as a generational matter. And we have also named my son a name that means the same thing as mine. So it's an important name from that perspective, both of signifying ethnicity, certainly, and national origin, but also of signifying generational connection.
TS:And what was it like growing up with that name, both in your offline life and later on in digital spaces?
RDC:So the answer to the question, what was it like growing up with that name depends on, of course, where I was. In India, the name Rashmi is a very common name. And I grew up partly in India as a young child. So when I was there, it felt like I had a name that was perfectly common and well recognizable. When I came to the US, it was the source of meaningful frustration for me, not least of all because my name was regularly mispronounced, but also it's a name that was the subject for me anyway of teasing. People just didn't necessarily understand the name. I, as a child, at times had a love-hate relationship with my name, but as I grew, certainly I felt pride and a connection through having that name.
TS:So you're a law professor, could you give us an overview of your research?
RDC:The core of my research is in property law, also in consumer law and in economic development, including local economic development and comparative local economic development.
TS:The paper that we're going to be talking about today is about autocorrect and algorithms. How do these topics fit into your work?
RDC:To be honest with you, this particular topic and this article are a result of a completely different process for choosing the topics on which I write than I've ever used before or since. And the short answer is that I was so disgusted by the way in which my name was autocorrected, but also by the repeated experience of writing names in emails that I was sending to other people and, at the point when I hit send, and only at that point, discovering that their name had been autocorrected. And I felt so regularly a sense of embarrassment, even shame, and real frustration and even disgust at having that autocorrection represent my goodwill in writing to people whose names were not what I would describe as Anglo, really.
So it was really a sense of just disgust, of being fed up, if you will, that inspired me to write this article and to really begin to research what is the deal with autocorrect technology and, because I'm a law professor and this is really important to me, what might the forms of legal redress for this kind of harm or set of harms be. But certainly also I started to talk about it and just share my experience. And so many people responded in those conversations with strong emotions, expressions of disgust, of frustration, really reflecting many of the things that I myself had experienced. In that respect, I had a sense that this was not only a relatively widespread issue at the time of having names autocorrected, but also a widespread experience among those with whom I spoke and corresponded of feeling more than de minimis harm.
RDC:It's one thing to have your name autocorrected and to feel, that's kind of funny, or even that's inconvenient, or even that it's annoying. Those are harms, if you will. But perhaps those who have developed autocorrect technology might look at those harms and essentially do a cost-benefit analysis and say, well, you know, yes, it's annoying to some people and that is a harm, but does that harm outweigh the benefit of convenience that we claim this technology provides, of being able to mistype but still have your typing autocorrected? So the point that I'm making when I describe the harms people reflected back to me as experiencing is that these were more than just annoyance or the inconvenience of having to correct autocorrect. They were harms that went deeper. I would describe them as dignitarian. I would describe some of them as economic, a real impact on people's use of the technology that affected their ability to get a job or their ability to connect with someone in a professional manner. So the range of harms that I was beginning to uncover in these conversations was much broader, and it included harms that were quite meaningful, quite deep.
TS:Do you have any specific examples of these harms that people were telling you about?
RDC:To send an email to someone else in a professional setting and to misspell their name, and to have the perception the person received be that you didn't care enough, that you weren't careful enough or professional enough to check the spelling of their name. That was something regularly expressed by people, and again perceived as a real harm: that people were judged in that way and it was only described to them later. You should be more careful. I'm not hiring you, or I'm not choosing you for this particular task or this position, because you are not careful in proofreading your work. What they only learned later was that it was not a lack of care. It was literally the way the technology was set up at the time, such that it was impossible to correct or even notice that the autocorrection had happened until after the email or correspondence had been sent.
TS:And when you started to look into this, what did the academic research around autocorrect look like?
RDC:This article was published several years ago, right? And I did not see much research at all on autocorrect. I don't think I recall a single legal article, which is, again, my area and what I was focused on. But I did find a number of articles written by folks in the tech world expressing concern about autocorrect's overcorrection, as well as a little bit of sociological research on this. And so it was new. It was a form of algorithmic bias that, at the time I was writing about it, was just at the frontier of recognition as algorithmic bias.
TS:So let's talk about the article. So it was published in 2021. And I'm curious how you actually went about the testing. Can you take us inside your thought process?
RDC:So the purpose of the article was really to explore the question of what legal remedies there might be for the range of harms that I was hearing described and that I was myself experiencing. The purpose of the testing was, in meaningful respects, descriptive of what I perceived to be the scope of the problem from a consumer's perspective, rather than, say, from the perspective of a tech person trying to fix the problem. And also thereby to generate a set of questions, to provide enough of a proof of concept, if you will, to generate questions that could be addressed in future research. So the testing that I did was to work with my research assistants first to identify a range of popular, quote unquote, ethnic names in the United States, as I was focusing just on researching this phenomenon empirically in the United States. I literally went to a number of different lists of most popular baby names from the years 2014 and 2016, partly because they were easy to find, in a number of different places geographically in the United States, specifically New York, Georgia, and Texas. And I wanted to choose popular, quote unquote, ethnic names because I assumed that if any non-Anglo names were going to escape autocorrection, these more popular non-Anglo names would be the ones.
RDC:I worked with my research assistants literally just to plug these names into a range of devices, both Apple and PC, computers and operating systems that included Apple, Microsoft and Android, and applications that included Microsoft Word, Notes, iMessaging, Gmail and WhatsApp. My RAs and I recorded, of course, what particular device and operating system we were testing, but also what happened when we plugged that set of 92 names in, one at a time. And the answers were essentially all over the place, right? All kinds of different things happened, including names that we subjectively perceived as equally non-Anglo being autocorrected in different ways by exactly the same system and device.
Here are a couple of examples from testing the iPhone 6 at the time and iMessaging. The name Jackson spelled J-A-X-O-N was one of the names that we plugged in. The iMessaging function would autocorrect the name J-A-X-O-N as soon as the person inputting that name hit the space bar, and the name was automatically corrected either to Jacob or to Jason. Here's another one: Aarav, A-A-R-A-V. Again, just hitting the space bar changed it to either Aardvark or Arachnophobia.
Now, I'm sure the technology has progressed and improved since then, even that particular technology, but at the time, those are two examples of it.
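Those corrections are consistent with how a naive dictionary-plus-edit-distance spell corrector behaves. The real systems are proprietary, so the approach and the tiny word list below are assumptions for illustration only, a minimal sketch of why a name absent from the dictionary snaps to whatever nearby entry the dictionary does contain:

```python
# Minimal sketch of a naive autocorrect, assuming a dictionary-plus-edit-distance
# design. The word list is hypothetical and illustrative; real systems are
# proprietary and far more sophisticated.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def naive_autocorrect(word: str, dictionary: list[str], max_dist: int = 2) -> str:
    """Replace any out-of-dictionary word with its nearest dictionary entry."""
    if word.lower() in (w.lower() for w in dictionary):
        return word
    best = min(dictionary, key=lambda w: edit_distance(word.lower(), w.lower()))
    return best if edit_distance(word.lower(), best.lower()) <= max_dist else word

# A name missing from the dictionary gets "corrected" to a nearby entry.
dictionary = ["Jason", "Jacob", "Jackson", "aardvark"]
print(naive_autocorrect("Jaxon", dictionary))  # → Jason
```

Because "Jaxon" is one substitution away from "Jason", this sketch replaces it on sight; "Aarav" has no dictionary neighbor within the threshold here and would pass through unchanged, though a more aggressive matcher could land on "aardvark", much as the real system apparently did.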
TS:So just to play devil's advocate here, some people might say, so what? It's just auto correct. Add your name to the dictionary and move on. What's your response?
RDC:The addition of the name to the dictionary, and the ease with which that can be done now, is a powerful response in the sense that it arguably is a convenient way to correct autocorrect moving forward, right? And it begins to reduce the level of inconvenience and so on. But I actually respond to this in the article itself. Let's put the economic harm aside for now; there may still be economic harms that people experience even when the technology is so easily corrected. Let's focus on the dignitarian harms. There still remains a meaningful set of dignitarian harms, community-focused harms, in essentially categorizing a group of names that have to be corrected in this way, even if they can be corrected conveniently, before they are recognized as quote unquote normal or at least perceptible by the technology that's used.
RDC:This is where the foundational nature of autocorrect technology matters. It is a technology I would describe almost as a public accommodation: something that everyone needs in order to carry out the functions, the important function here being communication, that they need to be able to carry out in life. And to imagine that that technology is unequally accessible and usable, even if perceptibly only marginally so, is nevertheless a harm, right? It's a harm that has a disparate impact, both from a legal perspective and from a pragmatic perspective, on a definable group of people.
TS:So it's essentially about equal access to the products that we all rely on for communication and to participate in society.
RDC:You're absolutely right that it's about equal access and the corresponding harm, right, is that unequal access is a statement about equity. It's a statement about belonging. It's a statement about the ways in which people are presumed to be able to participate in a collective society, in a given society and the steps that they have to take to be able to do so.
TS:Tech companies would probably say that they're optimizing for efficiency. They're giving most users at least what they want quickly. What do you think about that logic?
RDC:Well, it's illogical in the sense that it does not apply to most users anymore. Right? There are many, many, many users now of technology and of autocorrect technology who are non-Anglo, as defined in that narrow way by the set of names that are not autocorrected and the names that do not fit within that set. So I would say the logic doesn't bear out anymore.
DS:My name's Dhruti Shah and I am a freelance journalist and creative practitioner. So my name Dhruti, I love it. I actually love it now, but that's not to say that was the case when I was younger, because I didn't really know any Dhrutis. I thought, why have I got this strange name that none of my friends have? When I was younger, I'd ask my dad, what does my name mean in English? And he said it meant North Star. But then again, if you go to other spaces, it's a derivative of the goddess Durga, who's quite a powerful figure in Hindu mythology. It's a different name, it's unique. The other thing is there's a challenge with it, in that a lot of English people can't necessarily roll their Rs. So then they're like, Dhruti. So it's very Anglicised in that respect. And then the age of the internet came, and that in itself had its additional challenges. People would email me and they would spell my name wrong.
I did a variety of stories at the BBC. I worked in lots of different departments and I was actually in the business unit when this happened. I got a message one day which said, hi, E, isn't it funny? Did you know that your name autocorrects to Dorito? And then how do you respond to that sort of message and say, no, I don't think it's funny. And then I thought there must be something in this, this small thing actually it's reflective of something much bigger. At that point I was responsible for the BBC News LinkedIn account and I actually put a call out and quite a few people got in touch.
DS:And some of the stories were absolutely amazing. There was one guy called Nana who was getting called Nando's, a woman called Montserrat who was misnamed Monster Rat. And again, this was going beyond culture, race, heritage. Pretty much everywhere, everybody had a story, whether it's autocorrect, whether it's in the office. So you're seeing one thing being able to be expanded out to a much bigger, wider problem around identity.
I mean, for me, it is about the ethics of it because it is about bias. Like, why are certain names considered the wrong name? Why are names typos at all? What is a typo? Who decides what is a typo? Why is it that a kid has to write their name and then see it as if it's something wrong? And their friend is not wrong because their friend has a more common name. And therefore that's the one that's been put into the dictionaries.
TS:So you were saying that autocorrect isn't just about autocorrect, it's a window into algorithmic bias. So what does autocorrect tell us about this bigger problem?
RDC:At the time, I felt very strongly that autocorrect was such an important example of algorithmic bias because, and I'm returning again to the question of remedies, the answers that we need to generate to address bias in this kind of technology may very well be different answers than those we might generate, for example, in addressing algorithmic bias in criminal sentencing. When you look at a particular technology and a particular algorithm and the bias that may inhere in it, the purpose of that algorithm absolutely should be relevant to the remedy that you prioritize. And I think autocorrect was such an interesting example, especially in that day, but still today: it raises that set of questions and really opens up that box for us to think about. And one of the things that I really raised in the article was the extent to which such a technology should be owned as intellectual property, as a trade secret, any longer, given the very real possibility that by essentially democratizing the production of that technology, we can much more easily eliminate the racial and other biases that inhere in it.
RDC:In a situation like autocorrect, I would be much more interested in thinking about ways to make the technology accessible, modifiable, and usable both by consumers and by producers of the technology, in such a way as to make it less discriminatory in its impact. So let's just imagine that we re-envisioned autocorrect and access to the dictionaries in particular that underlie the autocorrect algorithms. Suppose we required, or tech companies agreed, to incorporate words into their dictionaries that were provided by regular consumers of autocorrect technology. In that respect, we would allow consumers to function as producers as well. Opening up the whole production of the dictionaries that underlie the autocorrect technology would be a powerful way to democratize the technology, but also, obviously, to correct the harm that's being imposed when autocorrect corrects non-Anglo names.
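That proposal can be made concrete with a toy sketch. Everything here is hypothetical, the base word list and the contributed names are illustrative assumptions, but it shows how merging user-contributed entries into the lookup dictionary changes what gets flagged as a typo:

```python
# Hypothetical sketch of the "consumers as producers" idea: user-contributed
# entries merged into the dictionary change what a naive checker flags.
# The base word list and contributed names below are illustrative only.

base_dictionary = {"jason", "jacob", "hello", "world"}

def is_flagged(word: str, dictionary: set[str]) -> bool:
    """A naive spell-checker flags any word missing from its dictionary."""
    return word.lower() not in dictionary

print(is_flagged("Rashmi", base_dictionary))  # → True: the name is a "typo"

# Merging a user-contributed dictionary at lookup time removes the flag.
user_contributions = {"rashmi", "dhruti", "aarav"}
merged = base_dictionary | user_contributions
print(is_flagged("Rashmi", merged))  # → False
```

The design choice this illustrates is simply that the set of "correct" words need not be fixed by the vendor: if the dictionary is a merge of vendor and community layers, the community layer can grow without anyone's name ever being categorized as an error first.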
TS:I want to come back to what your proposals are for tech companies and how they can remedy this. But I wanted to also talk about consumers and what they can do. I mean, say I'm listening to this, I'm frustrated with autocorrect, I want to do something. What's actually in my power?
RDC:And that's an excellent question. And the painful answer is: not as much as I wish there were. The reason is that, as consumers, our buying power is one of our most important forms of activism. And so we can certainly try to buy products that are less invasive in this respect. And we can certainly look for products, for example, that autocorrect less.
And then perhaps most importantly, we can be activist about it by sharing our experiences with the product on any range of online and other fora. One of the reasons that I was very interested in the I Am Not A Typo campaign was because that is essentially a campaign that takes the perspective of the consumer and amplifies the voices of consumers in being activist in that way.
TS:And that campaign really struck a nerve, thousands of people sharing their experiences, what did that tell you?
RDC:It really felt affirming. You know, it was another brilliant moment of empirical validation about the extent to which this is a problem that's felt deeply by a range of people with a range of names.
TS:So if we could go back to the tech companies, they would call this unintended harm. If you were in the room with the decision makers at these companies, what would you advise them to do?
RDC:So in my article, which as you mentioned was published several years ago, I gave a range of suggestions. Some of them may have been adopted, and I think have been. Others have been rendered irrelevant by the state of the technology. And I spent only a little time in the article on that range of suggestions for precisely that reason. In some respects, I would take the perspective of the consumer and say: as a technological matter, find the thing that maximizes convenience while still prioritizing equity of utility of the technology over convenience. So the first priority ought to be maintaining equal access along a diverse range of demographics, and the second should be to maximize convenience.
TS:So we've talked about what individuals and companies can do. What about law and policy? Where do you think the law needs to step in, if at all?
RDC:And this was really the key focus for me when I was writing that article. I strongly feel that a technology like autocorrect is a modern form of public accommodation. That term, public accommodation, is a term of art in the United States which refers to a business that is open to the public. So what is the business here? It is a technological system that is so ubiquitously used that really anybody who is doing word processing, or using a device such as a phone or a range of apps, would need to use it in order to use that particular device. Once we acknowledge that a technology like autocorrect functions essentially as a public accommodation, a set of laws becomes relevant for purposes of thinking about the legal remedy.
RDC:And the kinds of remedies that come to my mind are laws that require that public accommodations be equally accessible to all members of specifically defined classes: racial minorities and others who are defined as protected classes or minorities on the basis of ethnicity or national origin, in addition to race. But I would argue that we very likely should be thinking about how to address a much broader range of what I would describe as arbitrary exclusions. So all kinds of linguistic turns of phrase, even what one might describe as inimitable forms of speech, should be, in my opinion, open to somehow easily either opting out of autocorrect or not even being captured within the range of things that are autocorrected. And again, that can include modes of speaking that are regularly corrected not just by autocorrect technology but also, for example, by autofill kinds of technologies.
TS:So one of the big tech companies, Microsoft, has addressed the issue. What did you think when you heard the news?
RDC:I will say, first of all, I'm so delighted, but also it's really about time. It's wonderful, and I don't want to minimize the importance of such a change. It would make a meaningful difference in people's lives. And at the same time, the law professor in me would say the change took too long to come, and we need more than one tech company to make that change, certainly.
TS:Rashmi, thank you so much for joining me on the show. It's been a pleasure speaking to you.
RDC:Truly my pleasure. Thank you.
TS:That's the final episode of What's in a Name, for now. We've reached out to the tech companies at the centre of this story. If they respond, you'll be the first to hear.
If this series resonated with you, text it to a friend and leave a rating to help others find it. And if you have an autocorrect story or thoughts on what we discussed, email us at madeforuspod@gmail.com. Season 3 of Made for Us launches early next year. Make sure you're subscribed, and you can also follow us on social media or sign up for our newsletter to stay in the loop. Links are in the show notes. Thanks for listening.