
When AI starts making administrative decisions


One of the tenets of procedural fairness is the right to reasons for an administrative decision. Canadian tribunals and decision-makers have an obligation to explain why a particular outcome is reached and the rationale for it. 

The use of artificial intelligence (AI) as a tool in tribunal adjudication is challenging the common law standard for what amounts to adequate reasons for decision. 

For example, does the right to reasons established in the Supreme Court of Canada's 1999 decision, Baker v. Canada (Minister of Citizenship and Immigration), [1999] 2 S.C.R. 817, include the duty that a human tribunal member draft or arrive at the decision in question? To what extent can a tribunal delegate its decision-making authority to AI, in the interests of efficiency and to avoid delay? Will courts scrutinize the inherent or potential biases of AI that could taint a particular tribunal result? 

Most of these questions remain unresolved.

A recent decision of the Federal Court, Haghshenas v. Canada (Minister of Citizenship and Immigration), 2023 FC 464, sheds light on how Canadian courts may approach the fairness or reasonableness of administrative decisions written with the assistance of AI.

A powerful Chinook 

Haghshenas involved an application for judicial review of a decision by an immigration officer (the officer) at the Canadian Embassy in Turkey. The officer denied the applicant a work permit designed for entrepreneurs and self-employed foreign nationals seeking to operate a business in Canada (the work permit). 

One of the requirements for the work permit under paragraph 200(1)(b) of Canada's Immigration and Refugee Protection Regulations, SOR/2002-227 (the Regulations) is that the officer be satisfied that the applicant "will leave Canada by the end of the period authorized for their stay." 

In this case, the officer concluded that the applicant would not leave Canada at the end of their stay under the work permit. That is, the applicant's plan to start an elevator/escalator business in Canada did "not appear reasonable" given the business's speculative revenue projections and the fact that the company had not obtained the appropriate licences, among other reasons. 

In reaching this decision, the officer employed Chinook, a Microsoft Excel-based tool developed by Immigration, Refugees and Citizenship Canada (IRCC). 

According to the IRCC website, Chinook helps with "temporary resident application processing to increase efficiency and to improve client service," with the goal of addressing the backlog of work permit applications. It "does not utilize artificial intelligence (AI), nor advanced analytics for decision-making, and there are no built-in decision-making algorithms."

These statements notwithstanding, the applicant challenged the officer's use of Chinook on judicial review, arguing that employing AI to reach an administrative decision was both procedurally unfair and substantively unreasonable. 

The Federal Court rejected the applicant's position, largely on what appears to be an assumption that the Chinook tool constitutes a form of AI. In so doing, the court hinted at a number of important principles about how it may scrutinize the use of AI in administrative decision-making in the future. 

1. AI-assisted decisions made by human decision-makers are not procedurally unfair.

In rejecting the argument that the use of AI was procedurally unfair, the court appears to have drawn a line in the sand about the proper role of AI in administrative decision-making. The court noted that in the applicant's case, AI did not make the final decision regarding his work permit; the officer did.

Inherent in the court's reasoning is the presumption that it is procedurally fair for AI to assist an administrative decision-maker in rendering reasons for decision. AI assists the administrative state in promoting more efficient and timely outcomes. 

What appears unfair, however, is the state's delegation of its decision-making authority to AI. The court held: 

As to artificial intelligence, the Applicant submits the Decision is based on artificial intelligence generated by Microsoft in the form of "Chinook" software. However, the evidence is that the Decision was made by a Visa Officer and not by software. I agree the Decision had input assembled by artificial intelligence, but it seems to me the Court on judicial review is to look at the record and the Decision and determine its reasonableness in accordance with [the Supreme Court of Canada's decision in] Vavilov. Whether a decision is reasonable or unreasonable will determine if it is upheld or set aside, whether or not artificial intelligence was used. To hold otherwise would elevate process over substance. 

2. It is not unreasonable to use AI in administrative decision-making.

Separate from the issue of whether the use of AI is procedurally unfair, the court also rejected the argument that the officer's reliance on Chinook rendered the decision substantively unreasonable. 

According to the court, there is nothing inherently unreliable or ineffective about the use of AI, at least in this particular case. The court did not deem it necessary to delve into the inner workings of the Chinook software to determine if its mechanics were inappropriate or would lead to unreasonable results in the immigration assessment process: 

Regarding the use of the "Chinook" software, the Applicant suggests that there are questions about its reliability and efficacy ... the Applicant suggests that a decision rendered using Chinook cannot be termed reasonable until it is elaborated to all stakeholders how machine learning has replaced human input and how it affects application outcomes. I have already dealt with this argument under procedural fairness and found the use of [AI] is irrelevant ... 

So, in this particular context, the government's use of AI survived reasonableness scrutiny. 

Will AI replace tribunal decision-making? 

The court's approach above reflects a willingness to accept machine learning as a limited component of administrative decision-making, with the caveat that ultimate adjudicative authority must reside in a human tribunal. 

Haghshenas, however, only scratches the surface of the implications of the Canadian administrative state delegating its roles and responsibilities to machine learning in the interests of efficiency. 

As we are learning, AI comes with its own set of inherent biases and problems. 

There will no doubt be new circumstances in which an enterprising lawyer argues that a tribunal's reliance on AI tainted the outcome of a decision or rendered it procedurally unfair. Tribunals and agencies across Canada must therefore approach the question of whether and how to adopt AI in their decision-making processes with caution and with significant legal and ethical training. This is the only way to ensure that AI remains a fair and reasonable tool in administrative adjudication. 

Marco P. Falco is a partner in the litigation department at Torkin Manes LLP in Toronto who focuses on judicial review and civil appeals. He is also co-chair of Torkin Manes' diversity and inclusion committee. 

This article was originally published by Law360 Canada, part of LexisNexis Canada Inc.