Regulating Generative Artificial Intelligence: Balancing Innovation …

Introduction

In a matter of months, generative artificial intelligence (AI) has been eagerly adopted by the public, thanks to programs like ChatGPT. The increasing use (or proposed use) of generative AI by organizations presents a unique challenge for regulators and governments across the globe. Lawmakers everywhere are trying to strike a balance between fostering innovation and mitigating the risks associated with the technology. This article summarizes some of the key legislation and proposed legislation around the world that attempts to strike that balance.

AI Regulation in Canada

1. Current Law

While Canada does not yet have an AI-specific law, Canadian lawmakers have taken steps to address the use of AI in the context of so-called automated decision-making. Québec's private sector privacy law, as amended by Bill 64 (the Québec Privacy Law), is the first piece of legislation in Canada to explicitly regulate automated decision-making. The Québec Privacy Law imposes a duty on organizations to inform individuals when a decision about them is based exclusively on automated processing.

Interestingly, this duty to inform individuals about automated decision-making also appears in Bill C-27, the federal bill to overhaul Canada's private sector privacy legislation. Bill C-27 imposes obligations on organizations that use automated decision systems. Organizations that use personal information in automated decision systems to make predictions about individuals are required to:

In addition to the privacy reforms, the third and final part of Bill C-27 introduces Canada's first-ever AI-specific legislation, which is discussed in the next section.

2. The Artificial Intelligence and Data Act (AIDA)

On June 16, 2022, Canada's Minister of Innovation, Science and Industry (the Minister) tabled the Artificial Intelligence and Data Act (AIDA), Canada's first attempt to formally regulate certain artificial intelligence systems, as part of the sweeping privacy reforms introduced by Bill C-27.

Under AIDA, a person (which includes a trust, a joint venture, a partnership, an unincorporated association, and any other legal entity) who is responsible for an AI system must assess whether that system is a high-impact system. Any person responsible for a high-impact system must then, in accordance with (future) regulations:

It should be noted that "harm" under AIDA means physical or psychological harm to an individual, damage to an individual's property, or economic loss to an individual.

If the Minister has reasonable grounds to believe that the use of a high-impact system by an organization or individual could result in harm or biased output, the Minister has a variety of remedies at their disposal.


Key AI Regulation, Frameworks, or Guidance Across the Globe

As of this writing, AI-specific laws are few and far between on the international scale. In most countries, AI regulation simply derives from existing privacy and technology laws that do not explicitly address AI or automated decision-making. Nevertheless, some countries have made notable progress in addressing the dawn of AI. For example, on June 14, 2023, the European Union (EU) passed the AI Act, becoming the world's first comprehensive AI law.

The EU's new AI Act establishes obligations for providers and users depending on the level of risk posed by the AI system. It will be interesting to see whether other countries adopt a similar risk-based approach as they develop their own AI laws.

The following chart is a summary of the progress various countries have made in developing AI-specific legislation:

Evidently, the EU is leading the pack, with China and Brazil following closely behind. That so many of these instruments address generative AI reflects growing vigilance toward AI-driven tools such as ChatGPT.

Interestingly, while federal legislation addressing AI is developing slowly in the United States, some states have already drafted their own state-specific regulations. In California, for example, Bill AB 331 would amend the state's Business and Professions Code to require impact assessments for automated decision tools and to impose certain obligations based on the results of those assessments.[9] Individual state efforts such as California's show a growing recognition of just how pressing the need to regulate this technology is.

Takeaways

On a global scale, awareness of the risks associated with AI and generative models such as ChatGPT is evidently increasing. The inherent complexity and unpredictability of AI and its corresponding tools and models make regulating its use an ongoing challenge. Striking the right balance between allowing AI's benefits to flourish, such as the early detection and diagnosis of disease in medicine, and combatting AI's risks, such as bias and discrimination, remains elusive.

While AIDA has yet to become law in Canada, businesses that are using (or planning to use) AI and its various tools and models should be prepared to comply with upcoming AI laws such as AIDA. Here are some recommendations that organizations can adopt to get ahead:

The core of any AI compliance framework should be the incorporation of privacy-by-design and ethics-by-design principles. This means integrating data protection and ethical features into the organization's engineering systems, practices, and procedures. These features should allow an organization to adapt to changing technology and regulations.

