Dear Deeply Readers,

Welcome to the archives of Refugees Deeply. While we paused regular publication of the site on April 1, 2019, we are happy to serve as an ongoing public resource on refugees and migration. We hope you’ll enjoy the reporting and analysis that was produced by our dedicated community of editors and contributors.

We continue to produce events and special projects while we explore where the on-site journalism goes next. If you’d like to reach us with feedback or ideas for collaboration you can do so at [email protected].

What if an Algorithm Decided Whether You Could Stay in Canada or Not?

Canada is rapidly expanding the use of AI in its immigration service. Rights advocates Petra Molnar and Samer Muscati pick apart experiments that will be copied around the world.

Written by Petra Molnar and Samer Muscati. Read time: approx. 4 minutes.

Migrants at the U.S.-Mexico border detained in every single case presented; 7,000 foreign students wrongfully deported after being accused of cheating on a language test; racist or sexist discrimination based on social media profiles or appearance – what do these seemingly disparate examples have in common? In each case, an algorithm made a decision with serious consequences for people’s lives.

Algorithms and artificial intelligence (AI) are starting to augment human decision-making in Canada’s immigration and refugee system, with significant implications for the fundamental human rights of those subjected to these technologies.

In our new report with the Citizen Lab, we look at how Canada’s use of these tools threatens to create a laboratory for high-risk experiments. These initiatives may place highly vulnerable people at risk of being subjected to unjust and unlawful processes in a way that threatens to violate Canada’s domestic and international human rights obligations, influencing decisions on multiple levels.


Since 2014, Canada has been introducing automated decision-making experiments in its immigration mechanisms, most notably to automate certain activities currently conducted by immigration officials and to support the evaluation of some immigrant and visitor applications. Recent announcements signal an expansion of the uses of these technologies in a variety of immigration decisions that are normally made by a human immigration official.


What constitutes automated decision-making? Our analysis examines a class of technologies that augment or replace human decision-makers, such as AI or algorithms. An algorithm is a set of instructions, a “recipe” designed to organize or learn data quickly and produce a desired outcome. These outcomes can include recommendations, assessments and decisions.
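The “recipe” definition can be made concrete with a toy example. The sketch below is deliberately simplified and entirely hypothetical – every field name and threshold is invented for illustration, and it does not represent any system Canada actually uses. It shows how a fixed set of instructions can turn applicant data into a recommendation:

```python
# A hypothetical, deliberately simplified triage "recipe": a fixed
# set of instructions that maps application data to a recommendation.
# All field names and thresholds are invented for illustration.

def triage(application: dict) -> str:
    """Return a recommendation for a visa application."""
    score = 0
    if application.get("complete_documents"):
        score += 2
    if application.get("prior_visa_granted"):
        score += 1
    if application.get("flagged_country"):  # an opaque, rights-sensitive input
        score -= 2
    # The "decision" is just a threshold on the score.
    if score >= 2:
        return "fast-track"
    elif score >= 0:
        return "manual review"
    return "refer for scrutiny"

print(triage({"complete_documents": True, "prior_visa_granted": True}))
# fast-track
```

Even in this toy version, two applicants who differ only on an opaque input such as `flagged_country` receive different outcomes – the kind of embedded policy choice that, the report argues, must be visible and open to challenge.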

We examined the use of AI in immigration and refugee systems through a critical interdisciplinary analysis of public statements, records, policies and drafts by relevant departments within Canada’s government. While these are new and emerging technologies, the ramifications of using automated decision-making in the immigration and refugee space are far-reaching. Hundreds of thousands of people enter Canada every year through a variety of applications for temporary and permanent status.

The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies, leading to serious breaches of human rights, in the form of bias, discrimination and privacy breaches, as well as issues of due process and procedural fairness. These systems will have real-life consequences for ordinary people, many of whom are fleeing for their lives.
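One mechanism by which such bias arises can be sketched with invented data: if a system is trained on historical decisions that were skewed against a group, a model that simply learns past approval rates will reproduce that skew in future recommendations. The records and groups below are fabricated purely for illustration:

```python
# Hypothetical illustration: a "model" that learns per-group approval
# rates from biased historical decisions reproduces that bias.
# All data is invented for illustration.

historical = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

def learn_rates(records):
    """Compute the historical approval rate for each group."""
    counts = {}
    for group, approved in records:
        total, approvals = counts.get(group, (0, 0))
        counts[group] = (total + 1, approvals + approved)
    return {g: approvals / total for g, (total, approvals) in counts.items()}

rates = learn_rates(historical)

def predict(group):
    # Recommend approval only if the group's historical rate exceeds 50%.
    return rates[group] > 0.5

print(predict("group_a"), predict("group_b"))  # True False
```

Here the skew in past decisions (75 percent approval for one group, 25 percent for the other) becomes the rule itself: the system penalizes applicants for the history of decisions about people like them, not for anything in their own file.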

Our analysis also relies on principles enshrined in international legal instruments that Canada has ratified, such as the International Covenant on Civil and Political Rights, the International Convention on the Elimination of All Forms of Racial Discrimination, and the Convention Relating to the Status of Refugees, among others. Where the responsibilities of private-sector actors are concerned, the report is informed by the United Nations Guiding Principles on Business and Human Rights. We also analyze similar initiatives occurring in Australia and the United Kingdom.

Marginalized and under-resourced communities, such as residents without citizenship, often have access to less robust human rights protections and less legal expertise with which to defend those rights. Adopting AI without first ensuring responsible best practices and building in human rights principles at the outset will exacerbate preexisting disparities and lead to rights violations.


We also know that technology travels. Whether in the private or public sector, one country’s decision to implement particular technologies makes it easier for other countries to follow. AI in the immigration space is already being explored in various jurisdictions across the world, as well as by international agencies that manage migration, such as the U.N.

Canada has a unique opportunity to develop international standards that regulate the use of these technologies in accordance with domestic and international human rights obligations. It is particularly important to set a clear example for countries with weaker records on refugee rights and rule of law, as insufficient ethical standards and weak accounting for human rights impacts can create a slippery slope internationally. Canada may also be responsible for managing the export of these technologies to countries more willing to experiment on non-citizens and infringe the rights of vulnerable groups.

It is crucial to interrogate these power dynamics in the migration space, where private-sector interventions increasingly proliferate, as seen in the recent growth of countless apps for and about refugees. However, in the push to make people on the move knowable, intelligible and trackable, technologies that predict refugee flows can entrench xenophobia, as well as encourage discriminatory practices, deprivations of liberty, and denial of due process and procedural safeguards.

With the increasing use of technologies to augment or replace immigration decisions, who actually benefits from these technologies? While efficiency may be valuable, those responsible for human lives should not pursue efficiency at the expense of fairness – fundamental human rights must hold a central place in this discussion. By placing such rights at the center, the careful and critical use of these new technologies in immigration and refugee decisions can benefit both Canada’s immigration system and the people applying to make the country their new home.

Immigration and refugee law is also a useful lens through which to examine state practices, particularly in times of greater border control security and screening measures, complex systems of global migration management, the increasingly widespread criminalization of migration and rising xenophobia. Immigration law operates at the nexus of domestic and international law and draws upon global norms of international human rights and the rule of law.

Canada has clear domestic and international legal obligations to respect and protect human rights in the use of these technologies, and it is incumbent upon policymakers, government officials, technologists, engineers, lawyers, civil society and academia to take a broad and critical look at the very real impacts of these technologies on human lives.

The views expressed in this article belong to the authors and do not necessarily reflect the editorial policy of Refugees Deeply.
