This podcast episode delves into the critical issue of gender bias in artificial intelligence, highlighting how biases can perpetuate existing stereotypes and inequalities in the workforce.
Dr Anna Paraskevopoulou, an Associate Professor at Anglia Ruskin University, shares her insights on how AI systems often reflect societal biases, particularly in representation and professional roles.
Through a compelling experiment, she demonstrates how AI-generated images predominantly depict men in high-status professions while associating women with lower-paid roles.
The conversation also explores the intersectionality of social class and its overlooked impact on AI, emphasising the need for diverse teams in technology development to foster inclusivity and equity.
Listeners are encouraged to reflect on the implications of these biases for younger generations and the importance of critical engagement with AI technologies to create a more equitable future.
Takeaways:
Social class is not one of the protected characteristics, so sometimes we either overlook it or we don't pay enough attention to it.
Voiceover:You're listening to WithAI FM.
Joanna Shilton:Hello and welcome to Women WithAI, the podcast dedicated to amplifying the voices, experiences and perspectives of women in the ever-expanding field of artificial intelligence.
Today I'm joined by Dr Anna Paraskevopoulou, who presented on gender bias at the Cambridge AI Summit in June. But before we get into the conversation today, let me tell you a little bit more about her. Dr Anna Paraskevopoulou is an Associate Professor of Management at Anglia Ruskin University.
She's also the faculty Athena Swan Lead and Faculty Lead for Safe and Inclusive Communities in the Faculty of Business and Law at Anglia Ruskin University.
Anna is a Fellow of the Global Labour Organisation, and her primary research focuses on work and inequalities, particularly on employment experiences in the contemporary neoliberal economy. Within this theme, her work examines equality and diversity, the digital divide, and technological bias and disadvantage in the labour market.
Anna has co-authored papers and books with titles including 'Workplace Equality in Europe: The Role of Trade Unions' and 'Undocumented Workers: Legal Status, Migration and Work in Europe', and her research findings have been picked up and utilised by various national and international organisations such as Oxfam, the International Labour Organisation, the European Commission and local authorities in the UK.
Anna is also the recipient of a recent award for one of her papers. Anna, welcome to the podcast!
Anna Paraskevopoulou:Thank you, Jo. Thank you very much for the invitation, and I really look forward to discussing this topic, which is both emergent and very interesting to many people.
Joanna Shilton:Yeah, fantastic. Well, it's great to have you here. Congratulations on the award for the paper, by the way.
Anna Paraskevopoulou:Thank you. Thank you.
Joanna Shilton:And maybe we can start and perhaps you can tell us how you got into what you're doing, your journey into AI and yeah, kind of what got you interested in all of this.
Anna Paraskevopoulou:Yeah. Okay, thank you.
As you already introduced me, I'm an Associate Professor of Management in the Faculty of Business and Law at Anglia Ruskin University, and I teach Organisational Psychology to MBA students and Master's students. My research journey is quite long, and it has been focusing on work inequalities and the experiences people have in terms of inclusion and exclusion.
I focus on how different characteristics, such as, for example, gender, age, and ethnicity, intersect with one another to shape career paths, and on the impact of these intersections on individuals' conditions of work, feelings and behaviours.
Within this context, the rapid technological advances and the wider use of AI, especially in recent years, create new questions on inequalities and forms of exclusion, for example the digital divide or digital exclusion.
Who is able to use these technological advances, and who is not able to take this opportunity? There are also questions on the environmental impact of AI and what type of disparities may occur as a result, and on the biases that the systems themselves encompass and can then perpetuate or amplify, for example the gender bias which you mentioned and which I discussed at the AI Summit in Cambridge.
These are some of the areas my current work explores.
Joanna Shilton:And so, yeah, your presentation at the AI Summit, the title was Gender Bias in AI: Manifestations and Consequences. And I guess you're looking at the theory behind the problem. Can you walk us through some of the takeaways?
Anna Paraskevopoulou:Yeah, absolutely.
As you can appreciate, there is a huge body of literature on this topic, and I think it is fair to say that it is still developing, because the more AI and technology advance, the more researchers try to understand the social and economic changes in the labour markets and the types of impact they have: on skills; on how tasks, for example, are being distributed in workplaces; on the division of labour; on the new job categories that are emerging; on the types of contracts people are negotiating; and also on security, how secure we feel in our workplaces, and the trust, perhaps, that exists between us and the machines, but also between us and the employer.
Within this new context, despite different conceptualisations of these emergent themes in our society, there is a general agreement that in fact, the world of work is changing.
From earlier on, feminists like Cockburn, and the social shaping of technology approach that was developed later, identified that gender dynamics are embedded in technology. And this can be associated with the type of diversity that may exist amongst the workforce, amongst the people that are behind the machines, behind the systems and processes, the ones that develop the algorithms.
For example, it is quite well documented in the social sciences literature that when we have a lack of diversity in teams, in terms of, let's say, gender, ethnicity, age, but also other characteristics, this in fact has an effect on how the systems themselves are designed, but also on how they are implemented.
So this is the main thinking behind it: the feminist theorists, the feminist scholars, have identified that gender dynamics are in fact not being accounted for when we discuss technological developments, the disparities that exist in terms of opportunities and usage of these new technologies, but also the reproduction of existing biases by the systems themselves, because they have been designed by non-diverse teams.
Joanna Shilton:Yeah, so really that's just perpetuating the gender, the bias, isn't it?
Anna Paraskevopoulou:Exactly, exactly.
Joanna Shilton:That's just going to lead to massive consequences.
Anna Paraskevopoulou:Exactly.
Joanna Shilton:So maybe you could talk us through some of the, like the most sort of striking examples that you came across.
Anna Paraskevopoulou:I can start with the one I presented at the AI Summit, based on the existing literature and what has been developed and discussed in recent years. But also, for the needs of the presentation, I decided to do a little experiment myself.
So, the AI Summit presentation was entitled Gender Bias in AI: Manifestations and Consequences. So what I did was to ask AI to produce some pictures for me.
I decided to look at highly skilled professions, because I know from the literature that there is often a bias that these professions are usually occupied by men. So I asked the AI to give me a picture of an engineer, an electrical engineer. Immediately, it produced a picture of a man.
Then, I asked the AI to produce a picture of a surgeon. I made sure that I did not use any pronouns, to make sure my language was very neutral. It immediately came back with a picture of a man.
Then, I asked AI to produce a picture of a business person; I put CEO. I didn't say a businessman, I didn't say male or anything, I just said CEO. Immediately, it came back with a picture of a man.
Then, I decided to go for a profession which also I know lacks diversity. I asked for the legal profession, I asked for a picture of a judge, I got another picture of a man. I decided to ask for my own profession.
So I asked AI to give me a picture of a professor, and I got another picture of a man. I also asked for a lecturer, because it's a lower rank. I also got a man.
Then I asked for a picture of a chef, and I was given a picture in a quite nice, good-looking kitchen, quite a posh kitchen, I would say. And, of course, the chef was a man.
To be quite fair on AI, it asked me whether I'm satisfied with these pictures and asked me if I would like a different version of a picture, perhaps a different angle or perspective, perhaps a different background, change the time of the day, and also if I wanted different aspects, for example, close up on tools and so on. Interestingly, it didn't ask me about gender or any kind of social characteristics of the people in the images.
So although it asked me about filters, that was not included. My second experiment was to ask for jobs that are more associated with women.
So I asked for a picture of a teacher this time versus the lecturer and the professor. I immediately got a picture of a woman with children in the classroom. I asked for a picture of a nurse and I got a female nurse.
I asked for a picture of a care worker; she was a woman. I asked for a picture of a receptionist; immediately I got a woman. So that is in contrast to the CEO.
The CEO was a man, the receptionist was a woman; the surgeon was a man, the nurse was a woman; the professor was a man, the teacher was a woman. And I decided also to ask for a cook. This time I used the word cook, and I said a cook in a school. And I also got a woman for this.
So the chef is a man, but the school cook is a woman. I would also like to say that the images were predominantly white. All of the people I got pictures of were white, and quite young as well.
So, there was a little bit of an age bias. Then, I decided to ask AI not to give me a realistic image, but to draw a picture from scratch.
So I asked for a picture of a sports instructor working outdoors, and I got a drawing of a man training other men.
There, in the very background of the picture, there is one woman; she's also training, and she's wearing different gear from the men. She still looks athletic, but the men are wearing shorts or tracksuits and a top or T-shirt.
The woman is wearing very short shorts and a kind of sports bra, like what we wear in yoga: a somewhat more sexualised picture, if you like. Then I asked it to give me a drawing of a teacher, also teaching yoga, also outdoors.
It gave me a pretty picture with birds and flowers and trees, with the instructor in the foreground, and she is definitely a woman. And she's teaching only women this time. So in the yoga classes there are no men.
So this is one demonstration of the gender bias that I found myself when I was preparing my presentation for the AI Summit.
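For listeners who want to repeat a version of this experiment, the tally Anna describes is easy to record systematically. This is only a sketch: the `audit` helper and the `observed` labels are ours, replaying the perceived genders she reports in the episode rather than calling any real image model.

```python
# A minimal sketch of the kind of audit Anna describes. The image-model
# call itself is omitted (any generator could be substituted); the labels
# below simply replay the perceived genders reported in the episode.
from collections import Counter

# Perceived gender of the person depicted for each gender-neutral prompt
# (no pronouns, no "male"/"female" in the prompt text).
observed = {
    "engineer": "man", "surgeon": "man", "CEO": "man", "judge": "man",
    "professor": "man", "lecturer": "man", "chef": "man",
    "teacher": "woman", "nurse": "woman", "care worker": "woman",
    "receptionist": "woman", "school cook": "woman",
}

def audit(results):
    """Tally perceived gender overall, and group the jobs by gender."""
    tally = Counter(results.values())
    by_gender = {}
    for job, gender in results.items():
        by_gender.setdefault(gender, []).append(job)
    return tally, by_gender

tally, by_gender = audit(observed)
print(tally)  # Counter({'man': 7, 'woman': 5})
print(by_gender["woman"])
```

Running the same neutral prompts against several models, and again at later dates, would show whether the skew persists over time.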
Joanna Shilton:Wow. I mean, I hope maybe we can share a link to your presentation or the images at least so people can see. Because you're right.
And that's just, it's just reinforcing the stereotypes, isn't it? And as you say.
Anna Paraskevopoulou:Exactly.
Joanna Shilton:The ladies in the crop top and the short shorts, and the men just in long shorts and a T-shirt. Yes, and you're right, they're all white, and they're all good-looking.
I mean, there's got to be real world implications there.
I mean, as a fully grown adult, you're aware that it's not real, and we're all kind of like, well, hang on, AI is new, and we know that that's not a realistic representation.
But I imagine there's a real concern for younger generations, because they're just going to grow up with this, absorbing these messages and these stereotypes. What dangers do you see for young people growing up?
Anna Paraskevopoulou:Before I respond, I just thought of something: there was also the age bias. The women were all young, all of them. For the men, the two professions that were not young were the judge and the professor; those two looked a bit older.
The men in all the other ones, the engineer, the surgeon, the CEO and the chef, were all kind of younger men.
So there is a certain, there are various biases in these pictures.
But going back to your question: it's a very good question, and one we should all think about, because it does have consequences for how societies are being shaped in the future.
AI bias can affect young people because they tend to engage with technology more widely, but they also use technology more widely, for example in education, or in how they use services, and so on.
So some of the effects that have been identified by various research studies include, of course, as you already mentioned, perpetuating existing stereotypes, and this can actually be quite harmful. Take gender stereotypes.
For example, as I discussed in my experiment, the AI-generated images used assumptions about gender roles. It assigned the highly skilled roles, such as university professor, to men, while the lower-paid jobs were attributed to women.
The same biases exist for leadership roles. It has been identified that AI often associates leadership positions with men. So that is not a very good role model, or kind of experience, for young women using AI.
But beyond perpetuating stereotypes, there are other inequalities that may exist as a result of these biases within AI. For example, accessing opportunities.
If AI systems are used in recruitment, and they are increasingly being used for recruitment purposes, or in admissions to education, colleges or other types of establishment, then the screening processes themselves may in fact disadvantage women, ethnic minorities, older people, or people with particular minority characteristics.
So what it means is that women or ethnic minority people may apply for a job and be rejected because they have not fulfilled the standards of the biased AI that is assessing the application. I also think we must discuss that there are, ultimately, economic disadvantages.
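The recruitment point can be made concrete with a deliberately simplified sketch. Everything here is hypothetical (the feature names, the weights, the scoring function are invented, and no real recruitment product is being described): it only illustrates how a model fitted to biased historical data can reward a proxy feature, such as uninterrupted employment, that penalises career breaks.

```python
# Toy illustration of a biased screening model. The weights below stand in
# for what a model might learn from biased historical hiring data: it
# rewards "uninterrupted_years", a proxy that correlates with gender
# because career breaks (e.g. parental leave) are unevenly distributed.

def screening_score(applicant):
    """Hypothetical learned weights; NOT a real recruitment system."""
    return (2.0 * applicant["skills_match"]
            + 1.5 * applicant["experience_years"]
            + 1.0 * applicant["uninterrupted_years"])

# Two applicants with identical skills and total experience; the second
# took a four-year career break.
a = {"skills_match": 9, "experience_years": 10, "uninterrupted_years": 10}
b = {"skills_match": 9, "experience_years": 10, "uninterrupted_years": 6}

print(screening_score(a))  # 43.0
print(screening_score(b))  # 39.0 -- same skills, same experience, lower score
```

If a cut-off threshold sits between the two scores, the second applicant is rejected for a feature that has nothing to do with ability, which is exactly the screening disadvantage described above.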
For example, people may not be able to access loans or mortgages from the bank, or other forms of credit. And this has an effect on innovation and on entrepreneurs.
For example, young women entrepreneurs who want to start a new business may not be able to obtain the loan they need in order to start it. So these barriers generate and reproduce the inequalities and economic disparities that are already present in our society.
And if people are not being assessed by humans, who are perhaps more flexible and more creative (I know humans probably have their own biases and so on, but they also have training), it means that the system becomes completely inflexible: it does not provide any opportunities, and it enhances existing disparities. There are also disparities in terms of health.
Some researchers have also found that if the data incorporated in AI systems is more male-centric, the systems may not provide adequate information for diagnosing or providing treatment for some gender-specific issues. Reproductive health for women is one example that has been studied.
We need to ensure that AI systems are using more gender-specific data, to ensure accurate diagnosis for everyone, not just women, but everyone. And of course, it does have an impact on the psychology of a person.
If an individual who uses AI extensively, for research and so on, is exposed to biased content, these users may in fact internalise the stereotypes, and they may feel excluded from society, from particular groups of people, from particular types of jobs. That creates a feeling of not belonging, of not being part of society. They may feel isolated, and that can create stress and could lead to mental health issues.
So, addressing these biases requires systemic changes.
It means ensuring, for example, that there are diverse teams working in the development of AI. It also means revisiting the training data, in order to make it much more inclusive, both for the people developing AI systems and for the people promoting them. And it means better AI literacy amongst young people, so they understand the gaps that exist in today's AI systems. Perhaps in future the systems will become better and better; I'm pretty sure they will. But young people should develop AI literacy in a critical way, so they can make better use of these existing technologies.
Joanna Shilton:Yeah, I think that's really important.
That's a really important message to get out there, because it's all very well just accepting it and using it, but you're right: younger people especially need to be trained not to accept a picture just because it's been thrown up; they need more life experience too. And to go back to what you said earlier, when you put in the images and it asked you, are you satisfied with this? I think that's the really important bit that lots of people miss as well.
To say, well actually no, you've just given me men, or you've just given me women, or you've just given me pretty young people. I want a real picture of a real person.
Anna Paraskevopoulou:I mean, I'm sure if I had asked it, it would have done it.
But what I found interesting is that the criteria it offered for developing the pictures further did not include any gender-specific criteria.
It asked me for other things, but it did not occur to AI at that moment to ask if I would like different characteristics in terms of age or gender or ethnicity. It just took it for granted that that's what it was.
Joanna Shilton:Yeah, and I think that's the point, isn't it? Because AI isn't at a stage where it's thinking like a human. It's just the data, it's the maths, it's the predicting. And it's just being fed that bias. And as you say, this is Women WithAI, but it's not just women.
The bias is like race, age, disability, religion.
Anna Paraskevopoulou:Exactly. And social class, which is often maybe a little bit less visible. But it does exist there.
Joanna Shilton:Yes. Maybe you can expand on that a little bit more, on how these biases manifest themselves.
Anna Paraskevopoulou:I think this is a growing area of interest, because social class is not one of the protected characteristics.
So sometimes we either overlook it or we don't pay enough attention to it, because there is no legal coverage for it, although there is general human rights coverage and so on. But still, we need to understand it better.
I don't think we fully understand how these social class biases manifest in the use of AI systems. I think a little more research is needed in this area.
I think I did discuss, at the beginning, the intersecting characteristics.
And this intersectional element, or understanding, if you like, of how inequalities are manifested in society, but also in smaller spaces like the workplace, education, health and so on, is quite an important part of our understanding of what truly creates disparities. Social class is an area we also need to include. If I may expand a little bit more on the intersectional element:
This was first introduced by the Black American feminist scholar Kimberlé Crenshaw, who argued that biases are not experienced in isolation, but as a result of the different intersections between the characteristics that people have. And what is very interesting in Crenshaw's work is that she actually includes a very strong social class element.
So race, class and other characteristics can be taken into account. So how does this manifest itself in AI?
Well, an AI system may perform differently when we are looking for combined characteristics, for example race and gender. And that may actually lead to compounded and multiple disadvantages for some groups.
For example, Black women. I think the experiment I did demonstrated this: I mentioned age, race, and of course the type of profession and skills, which in a way denotes social class as well. So better understanding and more focus are needed, both for society itself and for researchers when they are conducting research on AI systems.
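The point about compounded disadvantage can be illustrated with a toy audit using invented error counts: the marginal figures for "woman" or for "Black" users each look only moderately worse, while the intersection, Black women, fares worse than either marginal figure suggests.

```python
# Hypothetical error counts from an imaginary system audit, invented
# purely to show why marginal averages can hide intersectional harm.
# Each entry maps (race, gender) -> (errors, total cases).
counts = {
    ("white", "man"): (2, 100),
    ("white", "woman"): (8, 100),
    ("Black", "man"): (10, 100),
    ("Black", "woman"): (30, 100),
}

def marginal(attr_index, value):
    """Error rate pooled over one attribute, ignoring the other."""
    err = sum(e for g, (e, n) in counts.items() if g[attr_index] == value)
    tot = sum(n for g, (e, n) in counts.items() if g[attr_index] == value)
    return err / tot

women_rate = marginal(1, "woman")                     # 0.19 overall for women
black_rate = marginal(0, "Black")                     # 0.20 overall for Black users
err, tot = counts[("Black", "woman")]
intersection_rate = err / tot                         # 0.30 at the intersection

print(women_rate, black_rate, intersection_rate)
```

An audit that only reported the two marginal rates would miss that the intersectional group's error rate is half again as high as either, which is Crenshaw's argument translated into measurement.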
Joanna Shilton:Yeah, so really, my takeaway from that is just how important it is to have diverse representation. So it's not just a white male creating AI content; it's got to be co-created so we can break down those biases.
Anna Paraskevopoulou:Exactly, yeah.
Joanna Shilton:And yeah, I mean. Yeah, tell us more.
Anna Paraskevopoulou:Okay, well, I think we can discuss a little bit more this element of lack of diversity in the technology sector.
I think it is quite a well-established fact that more men have been attracted to the sector, and it tends to be a more white male population working in this area.
So the participation of women, but also of people from other minority groups, can in fact be very helpful, because it will create a different kind of feeling within the sector, if you like. Diverse teams will be able, for example, to identify and correct biases during the design process of AI or other technology applications.
The systems produced are then going to be less likely to reproduce biases and stereotypes, because the more diverse the voices, the more diverse the approaches, and the easier it is to spot and smooth out these discrepancies. The design itself will also be more inclusive.
Because people from different experiences are going to have their voices heard and their skills put into the design, the systems themselves are going to appeal to a wider population.
For example, on healthcare disparities or reproductive health, they may appeal to younger women, or to older men who have a particular problem that other parts of the population don't, and so on.
So in general, the studies that exist have shown that when teams are diverse, they tend to produce better-quality products that appeal to a wider audience, and they tend therefore to be much more innovative and also more successful. Also, I talked a little earlier about trust.
It means we're going to trust the systems more, because we are going to perceive these systems, and the processes around them, as being more accurate. And this is imperative.
For example, with using AI during selection and recruitment processes: at the moment there is suspicion that inequalities are occurring, that people are not accessing jobs precisely because they have been profiled in the wrong way by AI systems. If the systems improve, then recruitment and selection processes will become much, much fairer.
And also we're going to trust them to do that as well, because trust is important.
I think including more women, and a more diverse population in general, in the technology sector, and especially in the important area of designing AI systems, means better, more equal practices in the STEM field. People who study STEM subjects are going to have a better opportunity to find a job and to bring their skills into this area of work. And of course, the technology sector itself will be enriched, because at the moment it is not as diverse.
I think we can also say, to an extent, as a result of all of this, that better-designed systems from more diverse teams will ultimately create better opportunities for women in the labour market, but also for other minority groups.
And not only better employment opportunities, but also better opportunities to develop new skills, skills that can be more adaptable to the challenges of contemporary economic systems and to what is needed in the labour market. Yes, I think those are the main areas I can think of at the moment.
Joanna Shilton:No, that's great.
And I absolutely agree with everything you've been saying; it just makes so much sense. There are so many opportunities, and this is the time for everybody to get involved, not to just let one group design it. There are so many skills needed. This is the time to get involved.
And I know we mentioned in your intro that you're looking into the digital divide, and obviously that includes digital skills and skills training, because there's lots of stuff out there, but what if you don't know how to do it or where to start? Can you tell us a little bit about your work there? Is there a digital skills gap?
Anna Paraskevopoulou:Yes, there are definitely digital skills gaps. There are always skills gaps, and there are always going to be skills gaps.
But what is important is for us to be vigilant and make sure that we equip young people and older people, all parts of the population, to in fact make these gaps smaller.
So one way of doing it is researching the digital skills, and the digital gaps that may exist amongst different parts of the population, because ultimately it's about inclusivity: it's about who is included and who is excluded.
The more research we do, and the more effort we put into closing these skills gaps, the more we are able to correct the disparities we have already discussed.
Enhancing digital skills in today's society in fact helps people to adapt to the requirements of the contemporary labour market, because they have good digital literacy skills. And of course, this translates to better employability, better prospects, better career paths.
Also, in all my research, what I found with more disadvantaged members of our society is that they have a lot of problems accessing services, and they often have these problems because they do not know how to use the digital systems that are already being implemented by various organisations.
I mean, how often have we seen an example where parents ask their children: please come and help me fill in this form, to go into the hospital, or to apply for this, or to get my benefit, and so on?
This is a skills gap, and this is something that every organisation can do something about, to help people better understand the new systems.
It also helps to increase diversity in the technology sector itself, because as the sector becomes more diverse, it becomes better equipped to understand what these gaps are, and how the systems themselves can be adapted to be more accommodating to people who perhaps do not have excellent technology skills, so they can be easier to use, easier to access.
It's also excellent for the education sector, because the education sector can learn a lot from how AI is being used and designed, and from technology in general. It can take advantage of these developments and become much more creative itself in the way it delivers knowledge to students, and in the way it conducts pedagogical research to better understand the needs of students, its audience, and to provide the more creative and more critical skills that students so desperately need in today's society.
Today's labour market, I should say, has become very competitive, requiring updating of skills all the time.
As you mentioned about the skills gaps, that's why I said there are always going to be skills gaps; the more we advance, the more skills gaps are going to be identified.
Joanna Shilton:But you've got to use it to improve it, and then that benefits everyone. I mean, AI is moving at such a rapid pace. How do you keep up with the latest? How do you keep on top of all the emerging technology?
Anna Paraskevopoulou:I think this is a challenge for everyone, but that does not mean it's not a welcome challenge. In my view, it's something that I do like to experience.
I do like to understand it, I do like to see how it works, and I do like to see its future. The introduction of AI, I think, opens new opportunities and new paths for our learning and teaching abilities.
We test ourselves in a better way, and I think as a result we can become much more creative. I know there's the argument that AI kills creativity. I don't think so. I think creativity goes hand in hand with humans.
Humans are always going to find ways of being more creative, and I think that's how we should use AI: in order to become much more creative.
But at the same time, we must also enhance our own critical skills. We should question AI systems.
We should question AI structures too, because that way we will have a better understanding of how they work and of what interventions we must make in order to make them better for everyone. For example, to understand what effects they have on our natural environment.
It has been reported that they do have damaging effects on our environment. How can we go on to correct that, but also understand the effects AI has on human relations?
And I think that's what we've been discussing throughout this session.
So, both of these examples are the two fundamental aspects of sustainability: environmental sustainability and social sustainability. And at the heart of it is our critical ability to ensure that we have better-functioning systems in the future.
So, for my own students, I always encourage them to follow the latest developments, because that is advancement.
But I also ask them to constantly question the information they receive, to constantly question the systems that produce this information, to be creative, to be thoughtful, and to always consider how to improve things. So for me, it is very important that we use AI to serve humanity, and not vice versa.
If we create these systems, they should be there to make our societies better and our lives much better quality, rather than for us to lose our jobs, or be discriminated against when we are looking for employment, or be given information which is incorrect and perpetuates stereotypes. The second we should never allow to happen, and we should always be vigilant about it.
The first, serving humanity, is what we should all work towards, to make the systems better for everyone.
Joanna Shilton:I love that approach. I think everyone should be taking this on board. And with all that in mind, what do you think about the future of AI?
What do you think it holds? Does it excite you? Scare you?
Anna Paraskevopoulou:I don't think it scares me, though I know it scares many people. But I think this question should be considered in relation to your previous question, because I don't think we can think about the future of AI independently of our current economic system or climate, independently of the current trends in our society. If we use AI with making as much profit as possible in mind, then many of its negative aspects will continue to exist.
And the environmental costs are also going to be enormous. That does scare me, I have to admit.
I do not want to see robots taking our jobs, but I would like to see robots taking the difficult jobs, for example, making our lives easier and creating new jobs for people, jobs that don't have such bad health impacts. Some jobs are hazardous, for instance; it's better to have machines doing them than us.
It happened in the past with the Industrial Revolution, and we should do the same now. But in the past it was also about maximising profit, and people used to live in terrible conditions.
We must therefore use AI to improve the lives of people, and also the lives of animals, because they also exist on our Earth, and the lives of plants, our general ecosystems. Then I think we will see the true benefit of technology and AI.
One that gives us cleaner air to breathe, one that gives us a more amicable life, that makes society happier rather than stressed, unhappy, unfriendly and isolated. So in short, we need to think more creatively about AI, but also in relation to the structures and systems of today's society, if we want to eliminate inequalities and other disparities that exist, and if we want to promote a fairer society.
Joanna Shilton:Thank you, Anna. That just makes so much sense.
I mean, we've spoken about lots and lots of things today, and there's a lot for people to take away and think about. Have you got any recommendations for our audience? Places they can go, or things they should be looking up?
Anna Paraskevopoulou:I think everyone should have a better understanding of what we truly mean by being inclusive, what we truly mean by being accommodating and allowing safe spaces and trusted spaces for people to express their opinions, to express their creativity, to be able to live happily in our society. To do that, I think people should take advantage of the training certificates that exist, they should be thoughtful, and they should read a lot on this topic.
There are excellent podcasts and excellent videos that have been produced.
There is a lot of academic and policy-oriented research: universities, governments, trade unions and NGOs have produced quite a lot of material. If HR professionals are listening to this podcast, they can look at CIPD materials; they have quite a lot.
Large organisations like the UN or the ILO, the International Labour Organisation, for example, and environmental groups. There are many, many sources for people to take advantage of. There is a lot of media attention at the moment.
There is continuous coverage, and I think this is another source people can look to. But above all, we need good quality education on these topics and a good understanding of how these developments affect us as individuals, as a community, as a society, and how they also affect organisations.
Joanna Shilton:Thank you. Well, Anna, if people want to keep in touch with you or to contact you and find out what you're doing, where's the best place for people to find you?
Anna Paraskevopoulou:They can find me through the Anglia Ruskin website. I also have a LinkedIn profile they can look at. So I think those are the best places.
Joanna Shilton:All the links are there. Dr. Anna Paraskevopoulou, thank you for coming on Women WithAI.
Anna Paraskevopoulou:Well, I'd like to thank you very much for the opportunity to have this discussion and to appear on your programme.
Joanna Shilton:Thank you.
Anna Paraskevopoulou:Take care.