The Ethics of Using Generative Artificial Intelligence in the Practice … – IPWatchdog.com

For those who are destined to use generative AI tools like ChatGPT, the ethical concerns presented are not insurmountable but do require practitioners to engage in serious forethought and consultation with clients prior to use.

The use of Artificial Intelligence (AI) has taken center stage in popular culture thanks to the significant advances of tools like ChatGPT. Of course, the use of these new, high-powered AI tools presents real issues for businesses of all types and all sizes. Notably, Samsung employees shared confidential information with ChatGPT while using the chatbot at work. Subsequently, Samsung decided to restrict the use of generative AI tools on company-owned devices and on any device with access to internal networks. Concerned about the loss of confidential information, Apple has likewise restricted employees from using ChatGPT and other external AI tools.

The actual or potential loss of confidential information is a matter of critical importance to technology companies, but it also must be of the utmost concern for all attorneys who have an ethical obligation to keep client information confidential.

The confidentiality concerns presented when using generative AI, a particular type of AI that can produce various types of content when prompted, must be thoroughly understood and appreciated. For example, do you know whether the AI tool will use the information you provide to train its model going forward?

According to ChatGPT, information submitted through the OpenAI API is not used to train the OpenAI models or to improve OpenAI's service offerings. However, data submitted through non-API consumer services such as ChatGPT can be used going forward to improve the model. So, once information is submitted through ChatGPT, the AI can use it to inform itself and answer the queries of others, which almost certainly means that information is no longer a trade secret. And it would be a gigantic failure of one of the most basic ethical requirements if such information were shared by an attorney or patent practitioner who is ethically required to maintain it in confidence.
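To make the API-versus-consumer-service distinction concrete, the following is a minimal sketch in Python of sending a prompt through the OpenAI API rather than pasting material into the consumer ChatGPT interface. The library version (openai 1.x), the model name, and the example prompt are assumptions for illustration only; nothing in the code itself controls whether submitted data is used for training, as that is governed by the provider's terms of service at the time of use, which should be verified (and client consent obtained) before anything sensitive is submitted.

    # Minimal sketch, not a definitive implementation: routing a prompt through
    # the OpenAI API from Python instead of the consumer ChatGPT interface.
    # Assumptions: openai 1.x library, a placeholder model name, and an
    # OPENAI_API_KEY environment variable. Whether submitted data is used for
    # training depends on the provider's terms, not on this code.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute whatever model your firm's review approves
        messages=[
            {
                "role": "user",
                # Never include client-confidential or trade-secret material without
                # informed client consent and a clear understanding of the terms of use.
                "content": "Explain, in general terms, what incorporation by reference means in a patent application.",
            }
        ],
    )

    print(response.choices[0].message.content)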

Recall that Rule 1.6(a) of the American Bar Association (ABA) Model Rules of Professional Conduct prohibits a lawyer from revealing "information relating to the representation of a client" unless the client gives informed consent. The U.S. Patent and Trademark Office (USPTO) Rules of Professional Conduct similarly prohibit patent practitioners from revealing "information relating to the representation of a client" unless the client gives informed consent. See 37 CFR 11.106(a). Both Rule 1.6 and Rule 11.106 further require practitioners to make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client. Of course, the USPTO Rules contain a caveat not present in the ABA Model Rules, which requires practitioners to disclose to the Office information necessary to comply with applicable duty of disclosure provisions. See 37 CFR 11.106(c).

Whether or how one can comply with Rule 11.106(c) when generative AI tools are used remains an open question. For example, if you rely on information from ChatGPT, how do you know where that information came from? Without knowing where the information provided by ChatGPT comes from, it is impossible to know what you may be incorporating into a patent disclosure. If the practice of blanket incorporation by reference under 37 CFR 1.57 is problematic (e.g., because you don't take the time to really know what is being included and whether it is contradictory), the blind incorporation of material provided by a generative AI tool like ChatGPT could be catastrophic. Is the information being provided culled from competitors, and are you about to include it in your patent application? What will that mean for ownership down the road if any of that information is relied upon in a claim? Could a competitor demonstrate that what you included was derived from information in ChatGPT's corpus, from whatever source, such that an unidentified inventor is lurking who could, through a successful petition or lawsuit, later be found to be a co-inventor and thereby share in the ownership interests?

Of course, even given the confidentiality risk, the loss of trade secret rights, and the uncertainty about where the provided information came from, all of which accompany the use of a generative AI tool like ChatGPT, it can still be a very tempting tool. When searching for ways to describe an innovation accurately, completely, and creatively, generative AI tools can accelerate the acquisition of information and data, and even provide text relative to some aspects of an innovation, hopefully limited to whatever discussion of the prior art you will provide in the background, or to context that demonstrates to the reader the benefits provided by the innovation. And in a world where both the Federal Circuit and Supreme Court continually demand more disclosure in patent applications, and clients simultaneously demand that patent professionals do more for less money, everyone is looking for ways to cut corners without crippling quality. So, from a risk-reward perspective, the use of generative AI tools may be too beneficial to pass up.

For those who are destined to use generative AI tools like ChatGPT, the ethical concerns presented are not insurmountable but do require practitioners to engage in serious forethought and consultation with clients prior to use.

ABA Rule 1.4 requires a lawyer to promptly inform the client of any decision or circumstance with respect to which the client's informed consent is required. See Rule 1.4(a)(1). A lawyer is also required to explain a matter to the extent reasonably necessary to permit the client to make informed decisions. See Rule 1.4(b). Lawyers are further required to reasonably consult with the client about the means by which the client's objectives are to be accomplished. See Rule 1.4(a)(2). The USPTO rule, 37 CFR 11.104, mimics the ABA rule.

Understanding how generative AI solutions collect, store and use the information that will be provided is a prerequisite for the informed consideration of issues like confidentiality, and ultimately for communicating with clients to obtain fully informed consent. For example, as you use a particular AI solution to craft a portion of a description for a patent application, will that AI internalize the information provided and continue to include it as part of the corpus it learns from and pulls from when engaging with future users? As already mentioned, whether ChatGPT will use that information depends on how it is submitted: data sent through the API is not used for training, while data entered through the consumer service may be. Knowing the specific terms and conditions associated with the use of the AI tool is required to reasonably consult with the client and explain the matter to the extent necessary to make an informed decision about whether to authorize the use of generative AI tools. See ABA Rule 1.4 and USPTO Rule 11.104.
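As one illustration of how a firm might operationalize this kind of review, here is a minimal, hypothetical sketch in Python of a pre-submission gate. The tool names, policy fields, and consent records are invented for illustration; they are not drawn from any actual product, vendor terms, or firm protocol, and any real policy would need to reflect the specific terms of service actually reviewed.

    # Hypothetical sketch of a firm-level pre-submission check. All field names
    # and values are illustrative assumptions, not any vendor's actual terms.
    from dataclasses import dataclass

    @dataclass
    class ToolPolicy:
        name: str
        uses_submissions_for_training: bool  # as determined from a review of the vendor's terms
        terms_reviewed_on: str               # date the terms were last reviewed, e.g. "2023-06-01"

    def may_submit(policy: ToolPolicy, contains_client_info: bool, client_consent_on_file: bool) -> bool:
        """Return True only if submitting the material fits this hypothetical protocol:
        non-confidential material may go to any reviewed tool; client-related material
        additionally requires documented informed consent and a tool whose terms say
        submissions are not used for training."""
        if not policy.terms_reviewed_on:
            return False  # never use a tool whose terms have not been reviewed
        if not contains_client_info:
            return True
        return client_consent_on_file and not policy.uses_submissions_for_training

    # Example: a consumer chatbot whose terms allow training on submissions is blocked
    # for client-related material even when consent is on file.
    chatbot = ToolPolicy("consumer-chatbot", uses_submissions_for_training=True, terms_reviewed_on="2023-06-01")
    print(may_submit(chatbot, contains_client_info=True, client_consent_on_file=True))  # False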

And perhaps on the most basic level, the rules of professional conduct require competence. "A lawyer shall provide competent representation to a client," reads ABA Rule 1.1, and the requirement is mimicked by USPTO Rule 11.101, which defines competence as requiring the "legal, scientific and technical knowledge, skill, thoroughness and preparation reasonably necessary for the representation." The two rules are otherwise similar, except that the general ABA provision does not mandate scientific and technical knowledge.

Emphasizing the requirement that practitioners exhibit the requisite level of competence may seem silly, or even trite, but it is worth remembering that generative AI tools like ChatGPT, while remarkable in their abilities, are not foolproof or flawless. For example, I had a conversation with ChatGPT in which the topic of the Patent Trial and Appeal Board (PTAB) arose. ChatGPT referred to the judges as Administrative Law Judges, which is inaccurate. The judges who make up the PTAB are Administrative Patent Judges, not ALJs. Perhaps a small distinction on its face, but as the conversation proceeded, this fundamental misunderstanding led ChatGPT to additional erroneous conclusions, including that PTAB judges must have seven years of legal experience to be hired. Indeed, many dozens of APJs with fewer than five years of legal experience have been hired by the USPTO to be PTAB judges. And when asked where this erroneous information was coming from, ChatGPT refused to answer the question. Still, the answers were delivered with such an air of authority and credibility that someone not completely and thoroughly knowledgeable could easily succumb to the erroneous information being provided.

The duty of competent representation almost certainly requires more of lawyers and patent practitioners than generative AI tools like ChatGPT are currently capable of providing. This does not mean the tools cannot be useful, assuming the hurdles relating to confidentiality and informed client consent have been addressed, but the blind use of ChatGPT by licensed professionals almost certainly will not rise to the level of competence expected by ethics officials. In other words, arguing that the information provided by ChatGPT seemed credible and reliable will likely not satisfy the threshold of competence expected should things go terribly wrong, which is generally the event that precipitates an ethical investigation and broader inquiry.

Finally, whatever decisions are made by the practitioner and firm with respect to the appropriate protocol for considering the use of generative AI, communicating with clients to obtain informed consent, and verifying the information, it is critical that all managing practitioners remember they have a duty to supervise those who work for them, as well as non-attorneys and non-practitioners employed or engaged to facilitate the representation of the client. See ABA Rules 5.1 and 5.2, and USPTO Rule 11.501 et seq.

Image Source: Deposit Photos; Image ID: 651971872; Author: Primakov
