Climate change and other disasters are displacing ever more people. Could artificial intelligence help predict impending crises and where humanitarian aid will be needed? Could algorithms be used to match refugees to regions where they will have the best chance of thriving? And what happens when you take human judgement out of the process, or if data is used to exclude some migrants unjustly?
Hilary Evans Cameron (Toronto Metropolitan University) starts off the discussion with a refugee case to show that human decision-making, itself, can be dangerously unreliable. Then host Maggie Perzyna speaks with experts Ana Beduschi (Exeter University) and Tuba Bircan (Vrije Universiteit Brussel), who walk us through what AI is, how it works, and its risks, pitfalls and potential for good.
Maggie is a researcher with the Canada Excellence Research Chair (CERC) in Migration & Integration program at Toronto Metropolitan University and this new podcast is Borders & Belonging. Maggie will talk to leading experts from around the world and people with on-the-ground experience to explore the individual experiences of migrants: the difficult decisions and many challenges they face on their journeys.
She and her guests will also think through the global dimensions of migrants’ movement: the national policies, international agreements, trends of war, climate change, employment and more.
Borders & Belonging brings together hard evidence with stories of human experience to kindle new thinking in advocacy, policy and research.
Top researchers contribute articles that complement each podcast with a deeper dive into the themes discussed.
Borders & Belonging is a co-production between the Canada Excellence Research Chair in Migration & Integration at Toronto Metropolitan University and openDemocracy. The podcast was produced by LEAD Podcasting, Toronto, Ontario.
Below, you will find links to all of the research referenced by our guests, as well as other resources you may find useful.
‘A helping hand from outer space: Doctors Without Borders utilise satellite data for humanitarian missions’, by Reliefweb (5 October 2020)
‘A Robot Lawyer Is Officially Assisting With Refugee Applications’ by Dom Galeon, Futurism (3 December 2017)
‘Germany to use voice recognition to identify migrant origins’ by BBC (17 March 2017)
‘How artificial intelligence is changing asylum seekers’ lives for the worse’ by Nicholas Keung, Toronto Star (9 November 2020)
‘Jordan: Is the UN’s biometric registration for Syrian refugees a threat to their privacy?’ by Zoe H. Robbin, Middle East Eye (23 October 2022)
‘Racial discrimination in face recognition technology’ by Alex Najibi, Harvard University (24 October 2020)
‘Refugees in Jordan are buying groceries with eye scans’ by Euronews (4 December 2019)
‘Who is making sure the A.I. machines aren’t racist?’ by Cade Metz, New York Times (15 March 2021)
‘AI-enabled identification management of the German Federal Office for Migration and Refugees (BAMF)’, Migration Data Portal
‘Nadine Project’, European Union’s Horizon 2020 research and innovation programme
‘Project Jetson’, UNHCR
‘The use of digitalisation and artificial intelligence in migration management’, European Migration Network
‘Latvia's free self-check e-tool for citizenship applicants’ by Jānis Reiniks, Republic of Latvia (2022)
‘How AI can help us better prepare for climate migration’ by Injy Elhabrouk, World Economic Forum (10 November 2022)
Agrawal, A., Gans, J., & Goldfarb, A. (2018). ‘Prediction machines: the simple economics of artificial intelligence’. Harvard Business Press.
Salah, A. A., Korkmaz, E. E., & Bircan, T. (Eds.). (Forthcoming 2022). ‘Data Science for Migration and Mobility Studies’. Oxford University Press.
Cameron, H. E. (2018). ‘Refugee law's fact-finding crisis: Truth, risk, and the wrong mistake’. Cambridge University Press.
Earney, C., & Moreno Jimenez, R. (2019). ‘Pioneering predictive analytics for decision-making in forced displacement contexts’. In Guide to mobile data analytics in refugee scenarios. Springer.
Beduschi, A., & McAuliffe, M. (2022). ‘Artificial Intelligence, migration and mobility: Implications for policy and practice’. World Migration Report.
Beduschi, A. (2019). ‘Digital identity: Contemporary challenges for data protection, privacy and non-discrimination rights’. Big Data & Society.
Beduschi, A. (2022). ‘Harnessing the potential of artificial intelligence for humanitarian action: Opportunities and risks’. International Review of the Red Cross.
Beduschi, A. (2021). ‘International migration management in the age of artificial intelligence’. Migration Studies.
Beduschi, A. (2017). ‘The big data of international migration: Opportunities and challenges for states under international human rights law’. Georgetown Journal of International Law.
Bircan, T., & Korkmaz, E. E. (2021). ‘Big data for whose sake? Governing migration through artificial intelligence’. Humanities and Social Sciences Communications.
Cameron, H. E. (2008). ‘Risk theory and ‘subjective fear’: The role of risk perception, assessment, and management in refugee status determinations’. International Journal of Refugee Law.
Cameron, H. E., Goldfarb, A., & Morris, L. (2022). ‘Artificial intelligence for a reduction of false denials in refugee claims’. Journal of Refugee Studies.
Welcome to Borders & Belonging, a podcast that explores issues in global migration and aims to debunk myths about migration based on current research. This series is produced by CERC Migration and openDemocracy. I'm Maggie Perzyna, a researcher with the Canada Excellence Research Chair in Migration and Integration program at Toronto Metropolitan University. Today's episode explores the burgeoning use of artificial intelligence, or AI, as a tool in managing migration and asylum. Two leading researchers will help us understand the risks and opportunities posed by this emerging technology. They'll discuss how AI might affect the civil liberties of migrants, international data flows and more. But first, a former litigator will tell us about her experiences defending refugee claimants and how, theoretically, AI could be used as a force for good in asylum claims. In her 10 years of working as a litigator in Canada, there's one case that Professor Hilary Evans Cameron will never forget.
Hilary Evans Cameron
I had a young Colombian client in a hearing who'd received threatening letters from a guerilla group. And the board member was being really hard on her in the hearing because her response was, well I just try not to think about them. I sort of tried to carry on.
To Professor Cameron, it was clear that the board member, that is, a member of the Immigration and Refugee Board of Canada (IRB), was neither familiar with the cultural context that the claimant was coming from nor with current studies on risk response.
Hilary Evans Cameron
One of the key findings of studies of risk response is that the more familiar a risk is, the easier it is to push it to the back of your mind. Right, so we talk about car accident risk as a classic familiarized risk. So, everybody knows that car accidents are a thing, that driving a car is dangerous, but it's not hard to put that thought to the back of your mind and get in a car. At the time of my client's hearing, there were as many people kidnapped for ransom in Colombia in a given year as died in car crashes in Canada. So that risk of being kidnapped, for Colombians at that time, was background noise. It was just something where everyone knew someone, or knew someone who knew someone, that it had happened to. So, you push it to the back of your mind, and you try to keep going. This was essentially her testimony, [it] was, yeah, of course I was scared, but you know, what are you going to do?
But the board member clearly did not understand the Colombian socio-political landscape enough to understand the claimant’s response. And as it turns out, he was not the only one to misinterpret a refugee claimant’s actions.
Hilary Evans Cameron
As a refugee lawyer, I was disturbed by some of the assumptions that I saw board members making. There were assumptions that I thought were probably, you know, not very solid in light of the social science. So, for example, board members would assume, you know, if you were really at risk, you would have fled as soon as the danger arose. And we have at this point just decades' worth of studies of people from disaster areas, earthquake warnings, floods, fires, you know, all kinds of different dangers that come up. And what is it, why, how is it, that people say, "well, you know what, I think I can ride this out". Or "I'm gonna stay a little while longer and see if it gets better". But we have all kinds of - you know, a deeper understanding of what it is that might explain why somebody doesn't just up and flee at the first opportunity. So as a lawyer, I was pulling together some of this research and submitting it in my clients' hearings and trying to convince board members to think about those assumptions.
Eventually, Professor Cameron realized that she wanted to go beyond trying to convince board members while on the job, so she got her PhD and began to deepen her research on the process of decision-making and some of the dangers that can come when decisions about refugees are made without enough information.
Hilary Evans Cameron
So, refugee hearings are this paradigm example of decision making under profound uncertainty. You know, these decision makers, not only are they interviewing somebody from a different culture with a very different sort of cultural context, you know, language issues, trauma. Claimants are often testifying through interpreters. I mean, there's just a whole mess of reasons why there's all kinds of potential here for miscommunication and misunderstanding. But in addition, and beyond that, at the end of the day, what the board member is being asked to do is really predict the future. I mean, they're being asked to say, when this person goes home, what's waiting for them there.
Before describing the optimal conditions for using AI, Professor Cameron first lays out some of the problems with the current system. The first is that decision-makers need to sift through a large package of information, often hundreds of pages, to help them understand the situation on the ground.
Hilary Evans Cameron
And so, a board member doing that is going to bring their best resources to that game. And they may have plenty of good skills at their disposal, but they're still human. And that's still a very, very difficult task. And one thing we know about this kind of decision-making is that decision-makers tend to be overly confident in the conclusions that they reach. And the data in this case is often very poor data. So, a decision-maker is looking at this weak, sparse data and making a confident and likely poor prediction. What an AI would do is look at that same poor, sparse data and be able to make a prediction, but along with that prediction would come an explicit statement of how unreliable it is. Because the AI is able to factor in the fact that this is not good data.
Professor Cameron is very clear to note that this is all theoretical, and that she doubts the proper legal systems would ever be in place to allow something like this to come to fruition. Still, in theory, the ideas that she and her co-author, the economist Avi Goldfarb, have put forward on how AI could be used in refugee claims do provide a bit of hope.
Hilary Evans Cameron
This would be a very helpful system because, in other words, at the heart of this refugee status decision-making process should be the idea that it is better by orders of magnitude to give protection to someone who doesn't need it than to withhold protection from someone who does. And so, if a decision-maker has that legal framework, that normative ethical framework, in place, and the AI tells them, look, this is really uncertain - in other words, the points that you're asking us to decide are points that we don't think you can reliably know - then that should allow more people who need protection to get it.
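The idea that a model should report not just a prediction but an explicit statement of its own unreliability can be sketched in a few lines. The toy example below is ours, not part of Cameron and Goldfarb's proposal: it estimates a probability from sparse evidence using a simple Bayesian approach, and the reported interval widens when the data are thin.

```python
import math

def predict_with_uncertainty(positives, total):
    """Estimate a probability from count data and report how
    unreliable that estimate is, using a uniform Beta(1, 1) prior
    and a normal approximation for a rough 95% interval."""
    a, b = positives + 1, (total - positives) + 1
    mean = a / (a + b)                                # posterior mean
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))      # posterior variance
    sd = math.sqrt(var)
    low = max(0.0, mean - 1.96 * sd)
    high = min(1.0, mean + 1.96 * sd)
    return mean, (low, high)

# Sparse data: 3 risk events observed in only 5 reports
mean, (low, high) = predict_with_uncertainty(3, 5)
print(f"estimate {mean:.2f}, 95% interval [{low:.2f}, {high:.2f}]")

# Richer data with the same proportion: the estimate barely moves,
# but the interval tightens, making the extra confidence explicit
mean2, (low2, high2) = predict_with_uncertainty(300, 500)
print(f"estimate {mean2:.2f}, 95% interval [{low2:.2f}, {high2:.2f}]")
```

With 3 events in 5 reports the interval spans most of the probability scale; with 300 in 500 it is far narrower. That gap between the two intervals is the "explicit statement of how unreliable it is" that a human decision-maker, prone to overconfidence, typically does not produce.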
Hilary Evans Cameron is an assistant professor at Toronto Metropolitan University's Lincoln Alexander School of Law. Many thanks to her for sharing her expertise with us. Now, let's take a deeper look at the role artificial intelligence plays in managing migration. Joining me is Ana Beduschi, professor of law at Exeter University in the UK, and Tuba Bircan, research professor in the Department of Sociology and research coordinator of the Interface Demography research group at Vrije Universiteit Brussel. Thanks to you both for joining me!
AI is often thought of as robots and self-driving cars. But the truth is it's embedded into our everyday lives. What exactly is AI?
That's a very good question. And I would say there is no single straightforward answer to that, because currently there is no internationally agreed definition of AI. But we could say that AI can be broadly understood as a collection of technologies that combine data, algorithms and computing power. If we follow the definition that is given by the European Commission, for example, these technologies will consist of software, but also hardware systems, that are designed by humans - and that's an important point, that they are designed by humans - and that, if they're given a complex goal, can act in the physical or in the digital dimension, perceive their environment through data acquisition, interpret the collected data, which can be structured or unstructured, and, deriving from this data, decide on the best courses of action to take to achieve a given goal. Broadly speaking, these are systems that are designed by humans, and these are technologies that we have in our daily lives. So, as I said, these are technologies that are embedded in smart assistants, in our mobile phones, or in our virtual home systems.
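That perceive, interpret, decide framing can be made concrete with a deliberately tiny sketch. Everything below is hypothetical, a thermostat-style agent rather than any real system: it acquires data from its environment, interprets it into a structured summary, and then chooses an action in pursuit of a given goal.

```python
# Toy sense-interpret-act loop echoing the framing above.
# All names here are illustrative, not any real system's API.

def perceive(environment):
    # Data acquisition: read raw observations from the environment
    return environment["sensor_readings"]

def interpret(raw):
    # Turn unstructured readings into a structured summary
    return sum(raw) / len(raw)

def decide(summary, goal):
    # Choose the action that moves toward the given goal
    return "heat" if summary < goal else "idle"

env = {"sensor_readings": [18.0, 18.5, 17.8]}
action = decide(interpret(perceive(env)), goal=20.0)
print(action)  # the agent chooses "heat", since 18.1 is below the goal
```

The point of the sketch is only that each stage is designed by a human; the "intelligence" is the composition of data acquisition, interpretation and goal-directed choice, exactly the structure the European Commission's definition describes.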
So, Tuba, just building on what Ana said, when we're talking about AI, are we talking about computers understanding?
Very, very, very important...