Jun 20, 2023

Regulating Generative Artificial Intelligence: Balancing Innovation and Risks*

By Roland Hung


In a matter of months, generative artificial intelligence (“AI”) has been adopted ravenously by the public, thanks to programs like ChatGPT. The increasing use (or proposed use) of generative AI by organizations has presented a unique challenge for regulators and governments across the globe. The balance between fostering innovation and mitigating the risks associated with the technology is one that lawmakers around the world are trying to strike. This article summarizes some of the key legislation, and proposed legislation, that aims to strike that balance.

AI Regulation in Canada

  1. Current Law

    While Canada does not have an AI-specific law yet, Canadian lawmakers have taken steps to address the use of AI in the context of so-called “automated decision-making.” Québec’s private sector law, as amended by Bill 64 (the “Québec Privacy Law”), is the first piece of legislation in Canada to explicitly regulate “automated decision-making”. The Québec Privacy Law imposes a duty on organizations to inform individuals when a decision is based exclusively on automated decision-making. 

    Interestingly, this duty to inform individuals about “automated decision-making” is also found in Bill C-27, the federal bill to overhaul Canada’s private sector privacy legislation. Bill C-27 imposes obligations on organizations around automated decision systems. Organizations that use personal information in an automated decision system to make predictions, recommendations or decisions about individuals are required to:

  • Deliver a general account of the organization’s use of any automated decision system to make predictions, recommendations or decisions about individuals that could have significant impacts on them; and
  • Retain the personal information related to the decisions for sufficient periods of time to permit the individual to make a request for access.

In addition to the privacy reforms, the third and final part of Bill C-27 introduced Canada’s first-ever AI-specific legislation, which is discussed in the next section.

  2. Bill C-27: The Digital Charter Implementation Act

On June 16, 2022, Canada’s Minister of Innovation, Science and Industry (“Minister”) tabled the Artificial Intelligence and Data Act (“AIDA”), Canada’s first attempt to formally regulate certain artificial intelligence systems as part of the sweeping privacy reforms introduced by Bill C-27.

Under AIDA, a person (which includes a trust, a joint venture, a partnership, an unincorporated association, and any other legal entity) who is responsible for an AI system must assess whether the AI system is a “high-impact system”. Any person responsible for a high-impact system must then, in accordance with (future) regulations:

  1. Establish measures to identify, assess and mitigate risks of harm or biased output that could result from the use of the system (“Mitigation Measures”);
  2. Establish measures to monitor compliance with the Mitigation Measures;
  3. Keep records in general terms of the Mitigation Measures (including their effectiveness in mitigating any risks of harm/biased output) and the reasons supporting whether the system is a high-impact system;
  4. Publish, on a publicly available website, a plain language description of the AI system and how it is intended to be used, the types of content that it is intended to generate, and the recommendations, decisions, or predictions that it is intended to make, as well as the Mitigation Measures in place and other information prescribed in the regulations (there is a similar requirement applicable to persons managing the operation of such systems); and
  5. As soon as feasible, notify the Minister if use of the system results or is likely to result in material harm.

It should be noted that “harm” under AIDA means physical or psychological harm to an individual, damage to an individual’s property, or economic loss to an individual.

If the Minister has reasonable grounds to believe that the use of a high-impact system by an organization or individual could result in harm or biased output, the Minister has a variety of remedies at their disposal. 


Key AI Regulation, Frameworks, or Guidance Across the Globe

As of the date of writing, AI-specific laws are few and far between on the international scale. In most countries, AI regulation simply derives from existing privacy and technology laws that do not explicitly address AI or automated decision-making. Nevertheless, some countries have made notable progress in addressing the dawn of AI. For example, on June 14, 2023, the European Union (“EU”) Parliament passed the draft AI Act, which is set to become the world’s first comprehensive AI law.

The EU’s new AI Act establishes obligations for providers and users depending on the level of risk posed by the AI system. It will be interesting to see whether other countries adopt a similar risk-based approach as they develop their own AI laws.

The following is a summary of the progress various countries have made in developing AI-specific laws, regulations or frameworks:




European Union

AI Act – June 2023 (draft passed in Parliament; negotiations at Council of the EU ensuing)

Passed on June 14, 2023, the EU’s monumental draft act aims to ensure that AI systems are safe, transparent, traceable, non-discriminatory, environmentally friendly and overseen by people rather than automation.[1] Similar to Canada’s proposed legislation, the extent to which AI systems would be regulated is based on the level of risk posed by the system, from limited risk to unacceptable risk. Generative AI, such as ChatGPT, would have to comply with certain transparency requirements.[2]

United Kingdom

AI regulation: a pro-innovation approach – March 2023 (framework for future efforts)

Taking a slightly different approach from its counterparts, the UK is pursuing a pro-innovation approach to future AI regulation, with an emphasis on regulating the use of AI rather than the technology itself. The framework is guided by five principles: safety; transparency; fairness; accountability; and contestability and redress.[3] Draft regulations have yet to be introduced in Parliament.


Brazil

Bill No. 2338 – May 2023

This bill, prepared by a commission of jurists instituted especially for the purposes of AI regulation in Brazil, establishes national rules for the development, implementation and responsible use of AI. It aims to reconcile the protection of rights and fundamental freedoms, appreciation of work, the dignity of the person, and technological innovation represented by AI.[4] AI systems are ranked by risk level, with “excessive” risk systems being completely banned. The bill is currently under deliberation.

United States

Blueprint for an AI Bill of Rights – October 2022 (voluntary framework)


Algorithmic Accountability Act – February 2022 (draft bill)

The voluntary framework applies to automated systems that “have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services”.[5] The Blueprint is meant to inform decisions where sector-specific privacy laws and oversight requirements do not already provide guidance. The voluntary framework follows a draft bill tabled in 2022, which takes a similar approach to the UK in its open-ended and more lenient provisions. While 2023 updates on either document have been scarce, multiple voluntary frameworks have been introduced since, such as the AI Risk Management Framework.[6]


Australia

AI Standards Roadmap – March 2020 (framework for future efforts)

In 2019, Australia’s Prime Minister acknowledged that the country had “not been as involved as they could be” in regulating AI. In response, this framework was established to provide guidance to help Australians develop standards for AI internationally.[7]


China

Administrative Measures for Generative Artificial Intelligence Services – April 2023 (draft regulations to come into force in 2023)

These draft regulations aim to “promote the healthy development and standardized application of generative AI” and will apply to the research, development and utilization of generative AI.[8] “Generative AI” in the draft refers to technologies that generate text, pictures, sounds, videos, code and other content based on algorithms, models and rules, which inevitably encompasses tools such as ChatGPT. AI systems are not ranked by risk level, making these regulations broad and all-encompassing.


Evidently, the EU is leading the pack, with China and Brazil following closely behind. The attention paid to generative AI in so many of these documents shows increasing alertness to AI-driven tools such as ChatGPT.

Interestingly, while potential legislation addressing AI is developing slowly at the federal level in the United States, some states have already drafted their own state-specific regulations. In California, for example, Bill AB 331 would amend the state’s Business and Professions Code to require impact assessments for automated decision tools and to impose certain obligations based on the results of those assessments.[9] Individual state efforts such as California’s show a growing recognition of how pressing the need to regulate this technology has become.


On a global scale, awareness of the risks associated with AI and generative models such as ChatGPT is evidently increasing. The inherent complexity and unpredictability of AI and its corresponding tools and models make regulating its use an ongoing challenge. Striking the right balance between allowing AI’s benefits to thrive, such as the early detection and diagnosis of disease in medicine, and combatting AI’s risks, such as bias and discrimination, remains an open question.

While AIDA has yet to become law in Canada, businesses that are using (or planning to use) AI and its various tools and models should be prepared to comply with upcoming AI laws such as AIDA. Here are some recommendations that organizations can adopt to get ahead:

  • Build a principle- and risk-based AI compliance framework that can evolve with the technology and regulatory landscape. The framework should be built with input from both internal and external stakeholders.
  • Part of the framework should set out clear guidelines around the responsible, transparent and ethical usage of AI technology.
  • Conduct a privacy and an ethics impact assessment for the use of new AI technology. The assessment should answer the following questions:
    • What type of personal information will the AI technology collect, use and disclose?
    • How will the personal information be used by the AI technology?
    • Will the data set lead to any biases in the output?
    • What risks are associated with the AI technology’s collection, use and disclosure of the personal information?
    • Will there be any human involvement in the decision-making?

The core of any AI compliance framework should be the incorporation of privacy-by-design and ethics-by-design concepts into the framework. This means that data protection and ethical features will be integrated into the organization’s system of engineering, practices and procedures. These features will likely allow an organization to adapt to changing technology and regulations.

For more information about the legal implications of the use of ChatGPT or other AI technology, please contact Roland Hung of Torkin Manes’ Technology, Privacy & Data Management Group.


*The author would like to thank Torkin Manes’ Articling Student Yasmin Thompson for her invaluable contributions in preparing this insight.

[1] “EU AI Act: first regulation on artificial intelligence” (14 June 2023) online: European Parliament <https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence>

[2] Ibid.

[3] Secretary of State for Science, Innovation and Technology “AI regulation: a pro-innovation approach” (2023) at 6, online (pdf): UK, Department for Science, Innovation & Technology <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1146542/a_pro-innovation_approach_to_AI_regulation.pdf>

[4] Senator Rodrigo Pacheco, “Bill N° 2338: Dispõe sobre o uso da Inteligência Artificial” (2023) at 29, online (pdf): Senado Federal <https://legis.senado.leg.br/sdleg-getter/documento?dm=9347622&ts=1684441712955&disposition=inline&_gl=1*dfh5iw*_ga*NTE3Mjg1OTU4LjE2ODY3NzAyMzY.*_ga_CW3ZH25XMK*MTY4Njc3MDIzNS4xLjAuMTY4Njc3MDIzNS4wLjAuMA..>

[5] Office of Science and Technology, “Blueprint for an AI Bill of Rights” (2022) at 8, online (pdf): The White House <https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf>

[6] “AI Risk Management Framework: Second Draft” (2022) online (pdf): National Institute of Standards and Technology <https://www.nist.gov/itl/ai-risk-management-framework>

[7] “An Artificial Intelligence Standards Roadmap” (2020) online (pdf): Standards Australia <https://www.standards.org.au/getmedia/ede81912-55a2-4d8e-849f-9844993c3b9d/O_1515-An-Artificial-Intelligence-Standards-Roadmap-soft_1.pdf.aspx>

[8] “生成式人工智能服务管理办法(征求意见稿)” (2023) online (pdf): Cyberspace Administration of China <http://www.cac.gov.cn/2023-04/11/c_1682854275475410.htm>

[9] Bauer-Kahan et al., “Assembly Bill No. 331” (14 June 2023) online: California Legislative Information <https://leginfo.legislature.ca.gov/faces/billStatusClient.xhtml?bill_id=202320240AB331>