Employer Warning: Don't Be Fooled by AI Hallucinations
Episode 14 • 15th August 2024 • Absence Management Perspectives • DMEC


Shownotes

Using artificial intelligence (AI) as a tool, not a replacement, for humans will be increasingly important for employer success, explain Bryon Bass, CLMS, CEO, DMEC, and Joe Lynett, a principal with Jackson Lewis P.C., in this episode. Listen in for examples of AI-related employer risks, including the four-fifths rule.

Additional resources:

Transcripts

DMEC: Welcome to Absence Management Perspectives: A DMEC Podcast. The Disability Management Employer Coalition, or DMEC as we're known by most people, provides focused education, knowledge, and networking opportunities for absence and disability management professionals. DMEC has become a leading voice in the industry and represents more than 20,000 professionals from organizations of all sizes across the United States and in Canada. This podcast series focuses on industry perspectives and delves into issues that affect DMEC members and the community as a whole. We're thrilled to have you with us and hope you'll visit us at dmec.org to get a full picture of what we have to offer, from webinars and publications to conferences, certifications, and much more. Let's get started and meet the people behind the processes.

Heather Grimshaw: Joe presented on AI during the DMEC Compliance Conference, and Bryon explored the topic in a recent DMEC trends article. Bryon, will you kick us off by sharing some of the AI-related risks you see for employers?

Bryon Bass: Yeah, I'd be happy to. As you alluded to in the article that I wrote in our trends piece, I've had some time to take a step back and think about how AI could impact the work that we do as professionals day in and day out. Compliance with legal and regulatory standards is an ever-present part of the work that we do, and that only gets more complicated when we introduce AI into the work that we're doing. Think about all the different laws that we have to comply with: at the federal level, of course, the Family and Medical Leave Act and the Americans with Disabilities Act, but then we also have an increasing number of state and local laws that we're required to consider and take into account. The potential pitfalls here include eligibility determinations, miscalculations of leave entitlements, and improper handling of certifications or other pieces of documentation that come in to substantiate the need for a leave. So let's take each of those three into consideration, just really quickly. From an eligibility determination perspective, there are key pieces of HR information that are necessary to determine an individual's eligibility, and they vary based on the law; they even vary state to state. Things like the actual number of hours that an individual has worked within the last twelve months, or within whatever period that law covers. You have to be sure that within your HR system that number is absolutely accurate; otherwise, how would AI be able to make a determination? Or perhaps you have something set up so that it ignores that and you still have a human in place. But that's just one example. Secondarily, those eligibility determinations are not always finite, meaning that we continue to see change over and over again. So at what point does the AI get updated to understand and take into account those eligibility changes, and to know that those changes are going to occur this year, or next month, or in three months, etcetera? Those are things to take into consideration. The other pitfall is the miscalculation of leave entitlements. Again, leave entitlements vary, but you might also be eligible for one type of leave and not another, so you need to ensure that you're not running leave entitlements under a program you're not entitled to at the same time as you're running those that you are. And the improper handling of certifications and documentation is one of the biggest concerns that I have, because as professionals in the field, we run into situations today where individuals don't really understand that just because a certification form has pieces of information and boxes that doctors and others can fill out, they don't all necessarily need to be filled out. There could be enough information just in the narrative that's being written, or in other pieces of the certification form, to make that determination. And so my question is, is AI going to be smart enough to take all of those things into consideration and ensure that it's looking at that form in totality? I just want to point out that these are some of the pitfalls we experience today in the human interaction aspect of this work, and AI could either improve that or fundamentally make it worse.
At the end of the day, the other area that I'm concerned with is that whole human oversight and ethical concerns aspect. I'm a firm believer that AI should aid in the decision-making process rather than replace it entirely. I don't think AI should make a final determination that someone is denied coverage under any of the workplace absence laws that we have in place. At most, it should alert a human to take a better look at what's going on and what needs to be investigated a little further. And in order to do that, we need to make sure we have effective human oversight in place so that we can mitigate those risks overall. I also think that organizations have to be cautious about privacy concerns and other ethical implications of AI use as well. As I spoke about earlier, when we look at things like medical certifications and medical information that's being provided by employees, that's private information. How do you ensure that that information remains private? Do you have the right firewalls set up, the right walls of whatever kind, from an AI perspective, to ensure that that data or that information doesn't inadvertently escape and get into the wrong hands?
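As a concrete illustration of the human-in-the-loop approach Bryon describes, here is a minimal Python sketch, not anything the speakers endorse: the record fields and workflow are hypothetical, and it uses the simplified federal FMLA thresholds of 12 months of service and 1,250 hours worked in the preceding 12 months. The point is that automation can screen, but a person makes any adverse call:

```python
from dataclasses import dataclass

# Federal FMLA thresholds used in this simplified sketch; state and local
# laws vary and change, which is exactly why a human reviewer stays in the loop.
FMLA_MIN_MONTHS_OF_SERVICE = 12
FMLA_MIN_HOURS_LAST_12_MONTHS = 1250


@dataclass
class LeaveRequest:
    employee_id: str
    months_of_service: int              # from the HR system of record
    hours_worked_last_12_months: float  # must be accurate, as Bryon notes


def screen_eligibility(req: LeaveRequest) -> str:
    """Approve only clear-cut cases; never auto-deny.

    Anything failing a threshold is routed to a human reviewer, so the
    tool aids the decision rather than making it.
    """
    if (req.months_of_service >= FMLA_MIN_MONTHS_OF_SERVICE
            and req.hours_worked_last_12_months >= FMLA_MIN_HOURS_LAST_12_MONTHS):
        return "eligible"
    # A failed threshold is a flag for review, not an automated denial:
    # the underlying HR data may be wrong, or another law may still apply.
    return "route_to_human_review"


# Borderline hours go to a person, not to an auto-denial letter.
print(screen_eligibility(LeaveRequest("E100", 14, 1240.0)))
```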

Heather Grimshaw: Those are such great points, Bryon. Thank you so much. So much to break down there. Joe, it would be great to hear you weigh in as well.

Joe Lynett: Yeah. I share Bryon's concerns about the use of AI, and not only that, but this is on the US Department of Labor's radar screen. On April 29, the DOL issued a field assistance bulletin to all regional administrators and district directors. The subject of the bulletin is artificial intelligence and automated systems in the workplace under the FLSA and other federal labor standards, and it outlines many of the points that were just made about how AI is used to calculate hours, review certifications, and integrate into an employer's FMLA leave administration process. The DOL makes two points about where AI poses the biggest risks, while recognizing that it can streamline many tasks and make life easier for the professionals who are administering leave. One is that the lack of human oversight is a major risk. The other is setting up AI systems that create systemic violations of the laws that the DOL enforces. So, for example, in addition to the risks that were already mentioned, the DOL points out that AI systems used to track leaves can potentially violate the FMLA where those systems don't distinguish between the types of leaves taken by an employee. Under the FMLA, you can't use the taking of FMLA leave as a negative factor in any employment decision. That's an example cited in the bulletin of a way in which an AI system can create almost systemic violations of the FMLA. The significance of the bulletin is that, since it goes out to regional directors in the various regions of the DOL, I think employers will start to see requests regarding any AI systems they use to track hours, review certifications, and so forth during FMLA investigations, or even wage and hour investigations, because these will become part of the standard DOL investigation when the DOL is conducting an FMLA compliance audit or a complaint has been filed by an employee alleging that they were denied FMLA leave or that their FMLA leave rights were interfered with in some way. These are all issues that I think employers are going to need to grapple with. Going forward, as Bryon said, employers are going to need to come to grips with how they're using AI and recognize that it is not a substitute for human judgment. It is a tool, a tool to aid in decision making, but not supplant it.
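To make the systemic-violation example concrete, here is a hedged Python sketch (the leave-type labels and point values are hypothetical): an attendance tracker that counts every absence would turn FMLA leave into a negative factor, so protected leave types must be excluded from any disciplinary score:

```python
from typing import List, Tuple

# Each absence is (date, leave_type); the leave-type labels are hypothetical.
Absence = Tuple[str, str]

# Leave categories that must never count as a negative factor.
PROTECTED_LEAVE = {"FMLA", "ADA_ACCOMMODATION", "STATE_PAID_FAMILY_LEAVE"}


def attendance_points(absences: List[Absence]) -> int:
    """Count absences toward discipline, excluding protected leave.

    An automated tracker that fails to distinguish leave types would
    count the FMLA days here, using protected leave as a negative
    factor in an employment decision, which the FMLA prohibits.
    """
    return sum(1 for _, leave_type in absences
               if leave_type not in PROTECTED_LEAVE)


absences = [("2024-03-04", "UNEXCUSED"),
            ("2024-03-11", "FMLA"),
            ("2024-03-18", "FMLA")]
print(attendance_points(absences))  # -> 1: only the unexcused day counts
```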

Heather Grimshaw: Joe, I think during the compliance conference, you made the comment that the machines shouldn't be running things, and the whole room laughed. Certainly an important point there. So, in the recent DMEC AI trends article, Bryon shares his experience testing an online AI tool and finding some inaccuracies. And during the compliance conference presentation, Joe, you shared an example of a lawyer who relied on AI-generated information for a court case that didn't go well. It would be interesting to hear you both share takeaways from these examples and what lessons you hope employers will draw from them. Joe, let's start with you here and then ask Bryon to chime in.

Joe Lynett: The lesson learned is sort of where I just left off, which is you need human oversight. You can't check your judgment at the door. I gave the example, and it's not just an example, it really happened, of the attorney in New York who, in opposing a motion to dismiss based on the lapse of the statute of limitations, used an AI tool that found great cases to support the opposition to the motion. But it was a classic case of AI hallucinating. Hallucination means that the AI produces what appears to be credible, objective information that happens to be false. In the case of the lawyer in New York, there were cases that didn't even exist at all. So from a lawyer's perspective, when you're using an AI tool and you see cases that seem so good it's hard to believe they exist, you actually should be checking whether they do exist, just to ensure that the AI tool you're using isn't hallucinating, so to speak. And that's what happened to this lawyer. His firm or an associate used an AI tool that found cases and put together what was a compelling argument as to why the case should not be dismissed, or why the statute of limitations had not expired, based on cases that were completely fictitious. So the takeaway is these are tools. They are tools that can create efficiency, but you cannot check your judgment at the door. You need to dig a little deeper just to verify that the information you're getting from your AI tool is accurate.

Heather Grimshaw: And that's kind of a perfect segue into the examples that Bryon shared in his trends piece. Bryon, will you elaborate a little on that?

Bryon Bass: Yeah, I'd be happy to. So there have been some tools introduced out there on the Internet that purport to provide individuals with the ability to ask questions about state leave entitlements, especially around the state paid family and medical leaves that we're seeing. The one in particular that I was looking at was for New York, and I asked it a series of questions just to test the accuracy. And I thought I asked some pretty simple questions. I was very clear about asking about my entitlement under paid family leave in New York, and I specifically said that it was for my own medical condition. And the tool said that I was eligible for New York Paid Family Leave, when those of us in the know know that your own serious health condition is not a qualifying event under New York Paid Family Leave. So I erased that and decided to try something else: let me see if it will calculate my benefit correctly, assuming I am eligible and have a qualifying paid family leave event. So I did that, put in some information, gave it an average weekly wage, and asked it to calculate what my monetary benefit would be. And it calculated that incorrectly as well. I provided feedback through the tool, giving it the proper information and telling it what it did wrong. And some three or four months after that occurred, I have gone back and the tool has been updated. So the point here is that, as Joe just said, you can't rely on everything to be accurate. Hallucinations are happening in these tools, and they're going to continue to happen, because a tool is only as good as the information that it's being fed. In many instances, these AI tools are able to go scour and search the Internet and piece things together, and they're not always going to do that accurately. Again, I go back to the human component. Many of the questions that we receive at DMEC, and many questions that I've received from practitioners throughout my career, have generally been prefaced with someone coming to me and saying, but I found this on the Internet, I found this on the web, and it says X. Well, just because you found it on the Internet doesn't mean it's accurate. What's its relationship to the laws and the statutes and the regulations that might be out there? There's another step that you need to take. So this need for continuous monitoring and improvement is going to be critical. We need to ensure that these tools are regularly evaluated and improved based on feedback and real-world performance, so we can ensure accuracy and reliability. Unfortunately, are there enough people out there who can actually take a step back and do this continuous monitoring and improvement in our space as things continue to change and become more and more complex? I don't know the answer to that question. But human oversight, as Joe also stated, is a huge component of what we need to take away from this conversation: you can't just leave the decision making up to the machines. We need to ensure that the decisions are accurate, especially when those decisions can have a detrimental impact on an individual's right to benefits or right to job protection, etcetera.
And overall, the other thing we need to be aware of is that more and more AI-related tools are going to be introduced as time goes on, probably faster than we have seen with any other adoption of technology. As employers, if we're using these tools in our processes, we have a responsibility to train and educate the users of those tools: how do they actually work, what are their capabilities and limitations, and what can you do when you identify inaccuracies, and how do you address them? So we have a lot of work to do in this area. I think we're all trying to be as cautious as we potentially can, but I know that there are individuals out there today who are relying on AI tools, and some of them might be relying on them a little too heavily at this point and might need to reconsider that approach.
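For reference, the benefit calculation Bryon tested is simple enough to write down, which makes the tool's error notable. Here is a minimal Python sketch of New York Paid Family Leave's published formula, 67% of the employee's average weekly wage capped at 67% of the statewide average weekly wage; the statewide figure below is the 2024 value and changes annually, so treat it as illustrative:

```python
def ny_pfl_weekly_benefit(avg_weekly_wage: float,
                          state_avg_weekly_wage: float = 1718.15) -> float:
    """New York Paid Family Leave weekly benefit: 67% of the employee's
    average weekly wage, capped at 67% of the statewide average weekly
    wage (the default here is the 2024 figure, updated each year).
    """
    RATE = 0.67
    return round(min(RATE * avg_weekly_wage, RATE * state_avg_weekly_wage), 2)


print(ny_pfl_weekly_benefit(1000.00))  # -> 670.0
print(ny_pfl_weekly_benefit(3000.00))  # -> 1151.16 (hits the cap)
```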

Heather Grimshaw: That's such a good point, Bryon, and I really appreciate what you noted there about that education component, because while employers might be approaching AI very carefully, the fact that employees can access these online tools and get information that they may trust could create additional complexities, not only for those employees but also for the employers. It is interesting to hear, though, that the tool fixed those issues, that learning opportunity. So, both interesting and a little daunting.

Bryon Bass: Definitely.

Heather Grimshaw: So, during presentations at the DMEC compliance conference, representatives from the Equal Employment Opportunity Commission and the Department of Labor warned employers to pay close attention to the four-fifths rule. Joe, will you provide listeners with an overview of this rule and why it's important for employers to pay close attention to it?

Joe Lynett: Sure. The four-fifths rule is used by the EEOC and DOL as a general rule of thumb for determining whether the selection rate for one group is substantially different than the selection rate of another group. The rule states, generally, that one rate is substantially different than another if their ratio is less than four-fifths, or 80%. That's why it's called the four-fifths rule. Just a quick overview of what a selection rate refers to: the selection rate generally refers to the proportion of applicants or candidates who are hired, promoted, or otherwise selected. The selection rate for a group of applicants or candidates is calculated by dividing the number of persons hired, promoted, or selected from the group by the total number of candidates in that group. So, as an example, suppose that 80 white individuals and 40 black individuals take a personality test that is scored using an algorithm as part of the job application, and 48 of the white applicants and 12 of the black applicants advance to the next round of the selection process. Based on those results, the selection rate for white applicants is 48 divided by 80, or 60%, and the selection rate for black applicants is 12 divided by 40, or 30%. So the ratio of the two rates is 30 over 60, or 50%. Because 50% is lower than four-fifths, or 80%, the four-fifths rule says that the selection rate for black applicants is substantially different than the selection rate for white applicants, which could be evidence of discrimination against black applicants. That's the rule and example in a nutshell. I hope that was clear enough.
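Joe's arithmetic is easy to encode. Here is a minimal Python sketch of the four-fifths check, using the numbers from his hypothetical:

```python
def four_fifths_check(selected_a: int, total_a: int,
                      selected_b: int, total_b: int) -> bool:
    """Return True if the lower selection rate is at least four-fifths
    (80%) of the higher one; False indicates a substantially different
    rate, which could be evidence of adverse impact.
    """
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b) >= 0.8


# Joe's example: 48 of 80 white applicants (60%) and 12 of 40 black
# applicants (30%) advance; 30% / 60% = 50%, which is below 80%.
print(four_fifths_check(48, 80, 12, 40))  # -> False (fails the rule)
```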

Heather Grimshaw: I'm not good with math, but even I was able to follow along there. So thank you for that. So, Bryon, would you share some context for what you're hearing from DMEC members and some of the challenges that have been identified?

Bryon Bass: Yeah, one of the conversations that I had after the EEOC gave their presentation at the compliance conference was about what that actually means in terms of the work that we do in this field. I think if we take a step back and really understand the people that we're serving, one of the more obvious protected groups that you need to be concerned with is those with disabilities. The selection process that we're talking about isn't necessarily limited to hiring practices. The selection criteria, as was addressed by the EEOC, can potentially be applied to any decision-making process used to determine whether or not someone is afforded rights they're entitled to under the law. And so we need to take that into consideration and understand, based on some of the examples that we provided earlier in this conversation around where there might be some pitfalls, that the four-fifths test might be applied there at some point. And so the question is, how do you ensure that you are limiting the bias and the discrimination in these processes, especially from an AI perspective, and especially considering there's increasing evidence, and increasing chatter among experts, that AI has the potential to increase or amplify the amount of bias that's already present in our society? That's concerning. The other issue is around transparency and accountability. We have an increasing number of employers that are outsourcing their administration and their overall program management. That doesn't absolve an employer of their compliance requirements, or of charges of discrimination, because their program administrator may have engaged in that discrimination. So it's important that employers utilizing these tools, whether software they're using internally or an external vendor or supplier whose services rely on AI, understand how those AI tools are being used in the decision-making process, how they're being audited, and how they'll be made aware of any changes to them. This is going to become even more complicated for employer and supplier relationships, because in my experience, I have already run into situations where, as software updates were applied to improve whatever was happening in the claims management continuum, an increasing number of employers wanted to understand what those changes were before they were actually implemented. I think this is just going to add to that level of complexity in terms of the transparency and accountability that needs to occur between the employer and the supplier in these processes.

Heather Grimshaw: That's a great point. Joe, during the DMEC compliance conference in March, you mentioned that you were not surprised that the first laws regulating AI are being passed locally. Can you talk a little bit more about that?

Joe Lynett: Yeah. The state and municipal legislatures seem to be a little more nimble than Congress in enacting laws that address cutting-edge issues. For example, even with the Pregnant Workers Fairness Act, which got passed relatively quickly over a fairly short amount of time, it was really the states and municipalities that enacted laws protecting pregnant workers and giving rights of accommodation due to pregnancy, even though the need for such protection was universally recognized in this country for a long time, since the ADA did not protect pregnancy or give a right to accommodation due to pregnancy in most cases. It was really the state and municipal legislatures that filled that glaring gap. And you see the same thing happening with AI. It's become much more a part of the public discussion, particularly in the last year. So what we're seeing is what we see with cutting-edge workplace laws: the state and municipal legislatures seem to be more nimble in getting those laws enacted much more quickly than Congress is. And that's what you see happening now.

Heather Grimshaw: So my last question for you both is what comes next? In other words, what advice would you give to employers of all sizes that are investigating AI options to streamline operations and boost efficiencies? Bryon, would you kick us off here, and then we'll ask Joe to weigh in as well?

Bryon Bass: Yeah, I'd be happy to. I really think that all things start with policies and guidelines. It's important to fundamentally ensure that there are comprehensive policies that outline acceptable AI use. Again, it goes back to that training and education we were talking about earlier: what AI tools are appropriate? How are they used? What are their capabilities and limitations? What should the check-in processes be to ensure that they're working as you thought they would, and doing so accurately? I also think it's important to work on fostering transparency and accountability. I spoke about that in terms of employers who are outsourcing or using software in their processes: ensure that there is transparency and accountability for everyone that's involved in the process, and recognize that just because you have a supplier or a vendor involved in your processes does not mean that you're absolved of any responsibility as an employer. Your responsibility is to ensure that your employees are provided with the rights and entitlements that they have under the law, and you also have a responsibility to ensure that those rights are not intentionally, or even unintentionally, interfered with. So you really need to ensure that that transparency and accountability is there. And the other thing, and this applies to everything that we do, but especially here, is that this is evolving at a very, very quick pace, and it's going to continue to evolve at a very quick pace. We need to find ways to stay informed and adaptable to keep up with the latest developments in the technology and in regulatory change, because regulatory change is going to continue whether we want it to or not. We need to be proactive in adapting to those new advancements and compliance requirements so organizations can stay ahead and avoid any potential pitfalls down the line.

Heather Grimshaw: Really important points there. And, Joe, would you like to weigh in as well?

Joe Lynett: Yeah, I mean, Bryon makes a cogent point here: when you purchase and use an AI tool as the employer, you own it, and you're responsible for it and its use. So if it is being used in a way that makes an employer run afoul of various laws, it's not going to be an explanation or an excuse that you didn't create the AI tool. In terms of some of the policies you were mentioning, Bryon, a sensible policy is: vet the vendor, the company you're considering purchasing your AI tools and software from, because employers aren't developing these tools. Other companies are, and they view employers as a very large market for their products and services. And they are not likely going to indemnify the employer if the employer is sued based on the use of that AI technology. And Bryon's absolutely right: this stuff is happening fast, it's going to be very fluid, and it's going to be very hard to catch up with the current state of affairs with AI, because once we've mastered one stage, we're off into another one. So I think it's a process that employers need to go into carefully, deliberately, and with eyes wide open as to the implications, both the upsides, which are pretty obvious, and the downsides, which are running afoul of legal compliance. And I think we're going to see more claims arising from the use of AI as employers increasingly use it. There's no question that AI will be used more and more by employers going forward, because it does create some pretty compelling efficiencies.

Heather Grimshaw: Absolutely. Thank you both so much for weighing in on this issue.

Bryon Bass: Thank you, Heather.

Joe Lynett: Thank you.
