Generative Artificial Intelligence (AI): Canadian Government … – JD Supra

The Canadian government continues to take note and react to the widespread use of generative artificial intelligence (AI). Generative AI is a type of AI that generates output that can include text, images or other materials, and is based on material and information that the user inputs (e.g., ChatGPT, Dall-E 2 and Midjourney). In recent development, the Canadian government has: (1) opened up consultation on a proposed Code of Practice (the Code) and provided a proposed framework for the Code;1and (2) published a Guide on the use of Generative AI for federal institutions on September 6th, 2023 (the Guide).2

More generally, as discussed below, as Canadian companies continue to adopt generative AI solutions, they may take note of the initial framework set out for the Code, as well as the information in the Guide, in order to minimize risk and ensure compliance with future AI legislation. A summary of the key points of the proposed Code and Guide is provided below.

The Code is intended for developers, deployers and operators of generative AI systems to avoid harmful impacts of their AI systems and to prepare for, and transition smoothly into, future compliance with the Artificial Intelligence and Data Act (AIDA),3should the legislation receive royal assent.

In particular, the Government has stated that it is committed to developing a code of practice, which would be implemented on a voluntary basis by Canadian firms ahead of the coming into force of AIDA.4For a detailed look into what future AI regulation may look like in Canada, please refer to our blog, Artificial IntelligenceA Companion Document Offers a New Roadmap for Future AI Regulation in Canada.

In the process of developing the Code, the Canadian government has set out a framework for the Code, and has now opened consultation on this framework. To that end, the government is requesting comments on the following elements of the proposed framework:

In the proposed framework for the Code, developers and deployers would be asked to identify ways that their system may attract malicious use (e.g., impersonate real individuals) and take steps to prevent such use from occurring.

Additionally, developers, deployers and operators would be asked to identify the ways that their system may attract harmful inappropriate use (e.g., use of a large language model for medical or legal advice) and again, take steps to prevent this inappropriate from occurring.

To this end, it would be suggested by the Code that developers assess and curate datasets to avoid low-quality data and non-representative datasets/biases. Further, developers, deployers and operators would be advised to implement measures to assess and mitigate risk of biased output (e.g., fine-tuning).

Accordingly, a future Code would recommend that developers and deployers provide a reliable and freely available method to detect content generated by the AI system (e.g., watermarking), as well as provide a meaningful explanation of the process used to develop the system (e.g., provenance of training data, as well as measures taken to identify and address risks).

Additionally, operators would be asked to ensure that systems that could be mistaken for humans are clearly and prominently identified as AI systems.

A future Code would potentially recommend that deployers and operators of generative AI systems provide human oversight in the deployment and operations of their system. Further, developers, deployers and operators would be asked to implement mechanisms to allow adverse impacts to be identified and reported after the system is made available.

In this vein, a future Code would recommend that developers use a wide variety of testing methods across a spectrum of tasks and contexts (e.g., adversarial testing) to measure performance and identify vulnerabilities. As well, developers, deployers and operators would be asked to employ appropriate cybersecurity measures to prevent or identify adversarial attacks on the system (e.g., data poisoning).

Developers, deployers and operators of generative AI systems may therefore ensure that multiple lines of defence are in place to secure the safety of their system, such as ensuring that both internal and external (independent) audits of their system are undertaken before and after the system is put into operation; and develop policies, procedures and training to ensure that roles and responsibilities are clearly defined, and that staff are familiar with their duties and the organization's risk management practices.

Accordingly, as the framework for the Code evolves through the consultative process, it is expected that it will ultimately provide a helpful guide for Canadian companies involved in the development, deployment and operation of generative AI systems as they prepare for the coming-into-force of AIDA.

The Guide is another example of the Canadian government accounting for the use of generative AI. The Guide provides guidance to federal institutions and their employees on their use of generative AI tools, including identifying challenges and concerns relating to its use, putting forward principles for using it responsibly, and offering policy considerations and best practices.

While the Guide is intended for federal institutions, the issues it addresses may have more universal application to the use of generative AI systems, broadly. Accordingly, organizations may consider referring to the Guide as a guiding template, while developing their own internal AI policies for use of generative AI.

In more detail, the Guide identifies challenges and concerns with the use of generative AI, including the generation of inaccurate or incorrect content (known as "hallucinations") and/or the amplification of biases. More generally, the government notes that generative AI may pose "risks to the integrity and security of federal institutions."8

To mitigate these challenges and risks, the Guide recommends that federal institutions adopt the "FASTER" approach:

Organizations may take heed of the FASTER approach as a potential guiding framework to the development of their own policies on the use of generative AI.

Various other highlights noted by the Guide include the following:

In view of the foregoing, Canadian companies exploring the use of generative AI may take a note of the FASTER principles set out by the Guide, as well as the various best practices proposed.

Taken together, the Code and the Guide provide helpful guidance for organizations who wish to be proactive as they develop their AI policies and ensure they are compliant with AIDA should it receive royal assent.

1Government of Canada, Canadian Guardrails for Generative AI Code of Practice, last modified 16 August 2023 ["Consultation Announcement"].

2Government of Canada, Guide on the use of Generative AI, last modified 6 September 2023 ["The Guide"].

3Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, 1st Sess, 44th Parl, 2021 (second reading completed by the House of Commons on 24 April 2023).

4Consultation Announcement.

5Consultation Announcement.

6Consultation Announcement.

7Consultation Announcement.

Read the original post:
Generative Artificial Intelligence (AI): Canadian Government ... - JD Supra

Related Posts

Comments are closed.