In this episode, we explore the often-overlooked disability data gap in AI, and why it matters for equitable hiring.
Ariana Aboulafia, who leads the Disability Rights in Technology Policy Project at the Center for Democracy & Technology (CDT), joins us to share insights on designing more inclusive algorithmic systems and creating datasets that are more representative of disability.
Whether you’re building AI, hiring talent, or advocating for accessibility, this episode is a great starting point for understanding how to reduce disability bias in technology.
In the conversation, we explore:
Missed last week's episode? Would Stephen Hawking get hired today? The hidden bias in AI recruiting tools
---
About Ariana Aboulafia
Ariana Aboulafia leads the Disability Rights in Technology Policy Project at the Center for Democracy & Technology, which focuses on the ways in which certain technologies impact disabled people.
An attorney with a strong background in public interest advocacy, and with particular expertise in disability, technology, criminal law, and the First Amendment, Ariana has also worked as a public defender.
Learn more about Ariana: https://cdt.org/staff/ariana-aboulafia/
Follow Ariana on LinkedIn: https://www.linkedin.com/in/arianaaboulafia/
Follow Ariana on Twitter: https://twitter.com/ArianaAboulafia
Read the disability data report: https://cdt.org/wp-content/uploads/2024/07/2024-07-23-Data-Disability-report-final.pdf
---
Connect with Made for Us
The idea that folks have that things that come from math or science or technology or computers are disproportionately likely to be correct, that's a human bias also. And so the idea of using these algorithmic systems as a way to remove human bias, I don't think we're removing human bias. I think what we might be doing is replacing one sort of bias for the other.
TS:Welcome to Made For Us, the show where we explore how intentional design can help build a world that works better for everyone. I'm your host, Tosin Sulaiman. Today we're picking up where we left off last week, looking at AI recruiting tools and what happens when they're not designed with everyone in mind.
Ariana Aboulafia from CDT, the Center for Democracy and Technology, joins me to take a closer look at the disability data gap, which helps explain why AI hiring tools and other algorithmic systems sometimes create biased outcomes for people with disabilities. Ariana leads the Disability Rights in Technology Policy Project at CDT, and she co-authored a paper last year that looks at how to design algorithmic systems that are more inclusive and less biased against people with disabilities. As you'll learn, there isn't a huge body of research on this topic, and Ariana and her colleagues are hoping to fill that gap.
I'm pretty sure you'll learn something new from this conversation. I know I did. And if you missed last week's episode with Susan Scott-Parker, you should definitely check that out as well. Now let's hear from Ariana.
AA:My name is Ariana Aboulafia. I am the Policy Counsel for Disability Rights and Technology Policy at the Center for Democracy and Technology, which is based in Washington, DC and also has an office in Belgium.
TS:Also known as CDT. So tell us more about your career journey and how you got involved in disability rights and the impact of AI on people with disabilities.
AA:Sure. So I am disabled, I've been disabled my entire life, and I got involved in disability advocacy really as self-advocacy. That was how it began, and it began the way a lot of disability self-advocacy begins, which is centered on accommodations, interacting with spaces that were just not as accessible as they needed to be, and finding my way as a self-advocate. Eventually I went to law school, and I went with the idea of becoming a civil rights attorney. And then when I was in law school, I took a course taught by a professor named Dr. Mary Anne Franks, a renowned expert on technology and technology-facilitated gender-based bias and violence. And that course framed technology as a civil rights issue.
It was the first time that I thought about the ways in which technology could overlap with the interests I already had, meaning civil rights and social justice. By the time I took that course, I had already decided that I was going to be a public defender, and I was one for just about two years. I wanted to be a public defender because I wanted to help people with disabilities who are in the criminal justice system.
AA:People with disabilities, in the United States at least, are disproportionately incarcerated, and the criminal legal system, particularly in the US, is a very difficult place to be, and it's an especially difficult place to be as an individual with a disability. And so I did that for a couple of years, and then I was able to find my way back to doing disability rights work in a more direct way. And that's what I do at CDT. CDT is a rights-based organization that focuses on civil rights and civil liberties in the context of technology and technology policy. My work focuses on providing a disability rights and disability justice lens to all of that work.
TS:So we're going to focus on AI or algorithmic hiring tools in this conversation. Most of us will have come across them in some form when applying for jobs, but maybe we can start with defining what they are and the ways they're used in the hiring process. And I know that CDT has done some research on this.
AA:Yes, we absolutely have. With hiring tools, the first thing that I would say is that we don't know precisely how prevalent these tools are. There are no mandated transparency requirements, right, such that there would be disclosure of every organization that uses these tools, but it is most likely that they are being used in a very widespread manner. I think that's fair to say.
As far as what hiring tools are, they can vary. As a general definition, hiring tools are technologies that are used during the hiring process. Oftentimes, they incorporate some sort of algorithmic system that can help an employer, or at least theoretically help an employer, to sort through applicants and decide who's going to move on to the next step of a hiring or interview process.
AA:Some examples of hiring tools, and they can vary, right? One example I talk about a lot is resume-screening algorithms. These are tools that take the resumes submitted as part of a job application, which I think is fair to say is fairly standard. They may have certain keywords that they're looking for, or they also may have certain things that they are not looking for. Those sorts of inputs are put into the algorithmic system, and then the output is that it screens out certain resumes and sends certain resumes on to whatever the next step of the hiring process is. One of the ways in which this can impact people with disabilities is, let's say, and I tend to use myself as an example whenever I can, partially because I think it's really powerful to speak from lived experience, but also partially because there are just as many ways to be disabled as there are people with disabilities, and people with disabilities interact with tech and with other systems in different ways.
AA:And so I do try to speak from my experience when I can. So, you know, I have a disability that at one point required me to take a year where I was not in school and also not able to have a full-time job, which is not horribly unusual. I am not the only disabled person who has ever had that experience, I'm sure. Now, theoretically, and to be clear, I'm not saying this happened to me, but theoretically, there are resume-screening algorithms that could, let's say, screen out anyone who has six months or more of an unexplained period of not being employed or in school. And even if that period is due to a disability, that resume-screening algorithm could still screen out that person. So that's an example of a hiring tool that could potentially have an impact on a person with a disability.
And one of the concerns is that that person with a disability, that applicant may not even know that that tool is being used. If they know a tool is being used, they may not know exactly how it may impact them. And they also may not know necessarily to ask for an accommodation at that stage of the hiring process if they don't know these tools are being used.
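To make the resume-gap example above concrete, here is a minimal, hypothetical sketch of the kind of screening rule Ariana describes. It is not any real vendor's tool: the data model, function names, and six-month threshold are all illustrative assumptions. The point it shows is that a facially neutral rule about employment gaps never sees the reason for the gap, so a gap caused by a disability is screened out like any other.

```python
# Hypothetical resume-gap screen, for illustration only.
from dataclasses import dataclass
from datetime import date


@dataclass
class EmploymentPeriod:
    """One entry on a resume: a continuous period of work or schooling."""
    start: date
    end: date


def longest_gap_months(history: list[EmploymentPeriod]) -> int:
    """Return the longest gap, in whole months, between consecutive periods."""
    periods = sorted(history, key=lambda p: p.start)
    longest = 0
    for prev, nxt in zip(periods, periods[1:]):
        gap_days = (nxt.start - prev.end).days
        longest = max(longest, gap_days // 30)
    return longest


def passes_screen(history: list[EmploymentPeriod], max_gap_months: int = 6) -> bool:
    # The rule never sees *why* a gap exists, so a year taken off because of
    # a disability is rejected exactly like any other unexplained gap.
    return longest_gap_months(history) < max_gap_months


# Example: a roughly one-year gap between two jobs fails the screen.
history = [
    EmploymentPeriod(date(2018, 1, 1), date(2020, 6, 30)),
    EmploymentPeriod(date(2021, 7, 1), date(2024, 1, 1)),
]
print(passes_screen(history))  # False
```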
AA:Another example of a hiring tool that is sometimes used is tools that monitor or measure eye contact or vocal cadence. These sorts of tools tend to be embedded into video interviews. So let's say while someone is giving a video interview, there is an algorithmic tool running that is monitoring their vocal cadence or their eye contact, and then again engaging in that screening process. And one of the concerns with that is there may be folks who are neurodivergent, or folks who are blind or low vision, who may just have eye movement or eye contact or vocal cadences that are outside of what the algorithmic system determines is within the pattern it is looking for. And then again, what you're going to have is people with disabilities being screened out on the basis of their disability, which, at least in the United States with the Americans with Disabilities Act, is not really supposed to happen.
AA:But again, if folks don't know that these tools are being used, it's really difficult to engage in the interactive accommodation process, even when you may be entitled to it under the ADA. So there are accessibility concerns, there are screening concerns, and really, you know, the larger concern here is that employment is one of the most important things when you're talking about social mobility, but also just independence. It's really important that folks with disabilities have access to employment, and these tools are potentially reducing that access to employment on the basis of disability for folks who would absolutely be able to complete the essential functions of a job, but are being screened out on factors that may not necessarily be that connected to those essential functions.
TS:And I guess a lot of the bias that we're seeing is inadvertent. So, you know, one of the ultimate goals of these tools is to get around the problem of human bias. Aren't they doing that?
AA:There are multiple different kinds of human bias, I think, is how I would answer that question. Automation bias refers to people's tendency to defer to determinations made by technological, or in this case algorithmic, systems. Automation bias can be seen as a bias towards technology, but at the end of the day, that's still a human bias that we're talking about.
We're talking about a bias that humans inherently have to believe that what a computer or an algorithmic system puts out is the correct decision or output. When I talk about automation bias, the term that I sometimes use, and it's a little bit of a generalization, is "the numbers don't lie."
AA:The idea that folks have that things that come from math or science or technology or computers are disproportionately likely to be correct, that's a human bias also. And so the idea of using these algorithmic systems as a way to remove human bias, I don't think we're removing human bias. I think what we might be doing is replacing one sort of bias for the other. Algorithmic systems are created by people, and sometimes their own biases can be reflected in the creation of those systems. That is not to say, to be clear, that the systems are purposely designed to have discriminatory outcomes. It is to say that sometimes they are created with under-representative or non-representative data sets that can then lead to disproportionately negative outcomes for marginalized people and for people with disabilities.
TS:Over the years, there has been research showing these tools to be biased against women and people of color. When it comes to people with disabilities, is it essentially the same problem, or are we talking about something a bit different?
AA:So I think the answer to that is yes and no. One of my concerns is with multiply marginalized people with disabilities, right? Where you have the biases that may affect people of color or women, and then, let's say, you have a disabled woman of color who is going to have those biases stacked on top of each other. In a lot of my work, I center multiply marginalized people with disabilities partially for this reason, right? Because these technologies can be biased against different groups of people in slightly different ways. And to a certain extent, algorithmic systems that are biased against people of color or women, or for that matter LGBTQ people, can be biased against disabled people in similar ways. They also can be biased against disabled people in dissimilar ways. But I would say my area of expertise and what I focus on is the ways in which they impact folks with disabilities.
AA:And one of the reasons that I do focus on that, again, is just because of my lived experience. But another reason is that disability tends not to be an area where there is a significant body of research on how these tools are impacting folks with disabilities. And so that's one of the things we try to add to that body of literature and to that conversation.
TS:And let's go back to some of the aspects of these systems that concern you the most. So you talked about the data that they're trained on, but there's also the lack of transparency into how the algorithms make their decisions. So can you talk a little bit more about the areas that you think the developers of these tools should be paying attention to?
AA:So one of the things I talk about a lot is centering people with disabilities in every single step: the creation of an algorithmic system, the deployment of an algorithmic system, the data gathering and data collection, and even on the back end, auditing and that sort of thing.
Centering people with disabilities in those conversations is one way that you can think about inclusively designing algorithmic systems. When I think about AI and tech, I use the term tech-facilitated disability discrimination to refer to the disproportionate outcomes that these technologies can have on people with disabilities, insofar as those outcomes are negative. The reason I use that term is because people with disabilities, self-advocates, disability rights groups, and disability justice groups understand how to combat disability discrimination. What we're seeing now is disability discrimination that's being facilitated by technology.
The rallying cry of the disability rights movement is "nothing about us without us." And so, to go back to answering your question, nothing about us without us means that as an AI developer, or a developer of an algorithmic system, you should not be creating algorithmic systems that could impact people with disabilities without involving people with disabilities.
AA:Nothing about us without us also means that these data sets need to be more representative of people with disabilities, because that can also mitigate or minimize the likelihood that they will have discriminatory outcomes on the other end. Again, one of the cornerstones, I would say, of disability rights is inclusive design, right? The idea that you can design things, whether they be physical spaces or, in this context, digital spaces, in ways that are inclusive of people with disabilities but that also help people without disabilities. I think that we can apply these principles of inclusive design to the creation of algorithmic systems and wind up with outputs that are better and carry less risk of discrimination.
TS:Okay, there is a lot to unpack there. So let's focus on the data sets, because I know you've done some research on this, and it talks about how better data on people with disabilities leads to better results. But from reading the report, it feels like we're not there yet.
AA:So I would definitely say we are not there yet. This report that you referenced is entitled To Reduce Disability Bias in Technology, Start with Disability Data. And the premise is that algorithmic systems create outputs based on inputs, and those inputs are, generally speaking, data sets. When those data sets are not inclusive of people with disabilities, the systems are basically being trained on patterns that don't include people with disabilities, and then they are potentially more likely to have a disproportionately negative outcome when people with disabilities wind up interacting with them.
AA:And so I would not say that we are there yet insofar as creating representative data sets, and in the report I talk about some of the reasons why. Part of it is because, and I mentioned this already, people with disabilities are disproportionately incarcerated, right? People with disabilities also disproportionately live in institutions or group homes. Those are places where it may be very difficult to reach people in order to gather data. I explored other reasons with my co-authors, Miranda Bogen from CDT and Bonnielin Swenor from Johns Hopkins, and we've been working on this for quite some time.
Miranda has done work on demographic data outside the context of disability as well, and Bonnie has done so much work on disability data and the importance of it. And to be clear, getting accurate disability data isn't just useful for purposes of technology and for mitigating algorithmic bias. Getting accurate data on disability can also be, at least in the US, one of the ways in which the federal government determines funding for things like benefits and resources.
AA:So getting disability data right is really important. And there are issues with reaching folks with disabilities. There can be issues with disability-related stigma, where folks may not feel comfortable identifying in any sort of public way as disabled. That can vary depending on culture and geographic location, and it can also vary based on type of disability. Then there are also questions about what we mean when we say disability. How are we defining that? And so in this report, we dive a little bit into what it means to be disabled, depending on who you're asking and what model or construction of disability you're using. And we make a lot of recommendations as to the best ways to create more representative data sets.
AA:One of which, spoiler alert, is to center people with disabilities, who can help structure and, again, inclusively design data-gathering mechanisms. But we make those recommendations hopefully as a way to be helpful for folks who do genuinely want to create more representative data sets but maybe just don't know where to start and don't have the expertise on disability. So no, I do not think we are there yet, but I do think that we can get there.
TS:And just to go back to what you said about defining disability, can you expand on that a little bit more? Because this is something that can complicate data collection.
AA:So one of the things that we do in this report is break down four different models, or constructions, of thinking about disability. We break them down as, one, a legal model, meaning the ways in which various statutes, and that's somewhat US-based, define disability. And so we mention the Americans with Disabilities Act, which has a fairly broad definition of disability because it's an anti-discrimination statute.
And then we mention, as another example, statutes related to the Social Security Administration, which are much narrower because they are defining disability as a determination of who gets benefits. We also mention the medical model, the idea being that disability is caused by individual limitations. Generally speaking, the medical model requires a diagnosis, and generally speaking, it thinks about a cure for a diagnosis as a way of mitigating the impact of disability.
AA:And then we talk about the social model, right, which views whatever hardships there may be as a result of a disability as not being a result of that disability, but more a result of living in an inaccessible world: limitations are connected to social, political, and economic systems rather than to an individual disability. Then we also take it a step further when we talk about the identity model, right? The idea that some people with disabilities see their disability as part of their identity, and that as such, if you were a data gatherer or data collector, you would want to collect disability data in the same context where you collect other demographic data. And the identity lens is usually the sort of lens through which we're talking about things like disability culture.
AA:And so there are so many different ways to think about disability. And so if we're gathering data and the question is, do you have a disability, right? Let's say it's just that broad and it's not defined. I don't think there's a way to really get accurate information on disability without defining for respondents what it is you're talking about.
Because even if the question came from, let's say, a government agency, someone may say to themselves, well, I am disabled under the ADA, but I wouldn't consider myself disabled under the Social Security Administration's statutes. And if the question is just, are you disabled? How does one answer that?
AA:And one of the things that my co-authors and I also mentioned in this report, and we mentioned it briefly, but I think it's worth mentioning here, is that when these sorts of questions are confusing, I think folks are less likely to respond to them. I think that we do need to make it as easy as possible and as accessible as possible for folks with disabilities to answer these questions, but we also need to just be clearer about what it is that we're talking about.
TS:I imagine that a lot of AI developers are probably not lying awake at night thinking about different definitions of disability. Could you just sort of break it down in terms of what are the implications for the people who are actually developing these systems?
AA:Yeah, so the first implication is that they don't have to stay awake at night thinking about constructions of disability. That's what I'm here for. I lose enough sleep for all of us combined. But really, the recommendation that my co-authors and I make in this report is just: be clear, pick something.
And the purpose of bringing to light that there are multiple different ways to define disability is not so that people feel defeatist or overwhelmed. It is to serve as a resource, to be helpful, and to raise awareness of the fact that there are different ways to identify disability, and that that is something that could be impacting the accuracy of your data collection. And so the recommendation really is just to be clear.
AA:If you are asking folks if they have a disability, just put in a definition of what it is that you need. And if you are clear on one hand, but also, and again, this is separate, it's a separate recommendation, but I think it's important here too. If you want respondents or individuals with disabilities to answer your questions, you do need to make sure that your surveys are accessible with assistive technology like screen readers. And that's part of it too.
TS:So we've talked about the recommendations for developing more representative data sets. What about the tools that are already out there on the market using flawed data? How can those be fixed? Is it too late? Is it too complicated?
AA:So I don't necessarily ever want to say that it's too late. I am going to go back to something that I say a lot, which is: just include people with disabilities. Algorithmic auditing is not a panacea. It is not a single cure for algorithmic systems that may be having discriminatory outcomes.
AA:But if we are going to do bias auditing, whether before something is brought to market, potentially as a way of marketing it as having been bias audited, or afterwards, it's really vitally important that these systems not just be bias audited for race and for gender, but also for disability. And one of the ways to ensure that is, again, to include people with disabilities in your auditing processes so that they can provide their expertise. And, you know, something else I'll say too is that these folks with disabilities, who I'm going to say absolutely should be included at every step of these algorithmic processes, don't have to be coders, they don't have to be engineers, they don't have to be computer scientists.
AA:They absolutely can be, but they don't have to be. There are folks with lived experience who are advocates, who are activists, who are lawyers, and their perspective is valuable even if they are not technologists or computer scientists, right? And I say that purposely because I think there is a conception that to involve folks with disabilities at every step of the creation of an algorithmic process, you would need to focus solely on coders or engineers. And I don't think that's true.
TS:And for people who may have been impacted by these technologies, what can they do if they suspect that they've been treated unequally?
AA:So, I mean, that's difficult, right? And part of the reason why that's difficult, again, is that it can be very hard to draw that connection between some sort of hiring tool and some sort of outcome for a person with a disability in a hiring process.
AA:But, you know, there are local and national disability rights organizations that may have some resources, and I would always recommend, again, going back to community to work through these sorts of things.
TS:So how optimistic are you that there's a willingness to address the problems with these AI hiring tools?
AA:I'm not sure. I mean, you know, I do think these problems can be solved. I don't think it is easy to solve these sorts of issues, but I do think they can be solved. As far as willingness, I'm really not sure, but I do have optimism, in that I do think there are solutions.
TS:And one thing that I remember reading in another report that CDT published, which was actually about e-proctoring technologies, I think the report made the point that disability is so broad and so diverse, and a lot of these systems are essentially trained to flag atypical behavior, so it's actually very difficult to design a system that is truly inclusive. I wonder what the similarities are with these hiring tools. I imagine it's a similar case, but you said that you're optimistic. What would a truly inclusive AI hiring tool look like?
AA:So I want to address what you initially mentioned, and this is something I say all the time: algorithms are inherently trained on pattern recognition. That's how they do what it is that they do. A lot of people with disabilities, by virtue of their disability, exist outside of a pattern. And so there is an inherent tension, or potentially an inherent incompatibility, between algorithmic systems that are trained on patterns and people with disabilities who exist outside of them.
AA:But, and again, that's why we titled this report Start with Disability Data, because if you have a more representative data set that can go into creating a pattern, such that the pattern recognition is more broad and varied, then you are less likely to have these discriminatory outcomes.
That can be in hiring, but it can also be in other contexts where these algorithmic systems are being used. And so the answer to how do we inclusively design algorithmic systems, right, it really is better data and also more people with disabilities. And not just having more people with disabilities in these spaces, but listening to them.
TS:Thank you so much for your time. It was great to speak to you.
AA:Wonderful, thank you so much for having me.
TS:Thanks to Ariana Aboulafia for a really insightful discussion. If you'd like to read the report she co-authored, I've included the link in the show notes.
Thanks for joining me on today's show. And if you enjoyed it, why not tell someone about it? I'd be grateful if you could also leave a five-star rating on Apple Podcasts or wherever you're listening so that others can discover it as well. And next time you're on LinkedIn or Instagram, look us up at Made For Us Podcast. See you next time.
AA:I always recommend that people read Algorithms of Oppression by Dr. Safiya Noble. That was the book that got me started in thinking about tech as a civil rights issue, so that is one I always recommend. My favorite album of the year so far is Maggie Rogers' Don't Forget Me. Every song on the album is a hit.