What Does Generative AI Mean for the Justice System? (Part 2)

Courts need to consider not only their own use of generative AI, but also potential use by lawyers and other parties submitting evidence.

Lawyers may use the technology for help with research or drafting documents, for example, but over-reliance can be risky because generative AI is known to sometimes fabricate information. Some companies in the legal space, however, are betting that this problem lies mostly with general-purpose AI tools. They've been announcing specialized models trained on legal texts in an effort to reduce fabrications.

Judges also should be alert to other kinds of risks that could emerge from the technology, such as highly convincing AI-created photos, audio or video that could be entered as evidence. At present, these deepfakes may be difficult to detect, although several AI companies have made voluntary promises to develop a system for distinguishing AI-generated media.

Such fabrications are, perhaps, unsurprising. Today's general-purpose generative AI tools, including ChatGPT, are designed to write well-structured sentences, not to produce accurate information, said Chris Shenefiel, a cyber law researcher at the Center for Legal and Court Technology at William & Mary Law School.

"It's designed to predict, given a topic or sentence, what words or phrases should come next," Shenefiel said. "... It can fall down, because it doesn't validate the truth of what it says, just the likelihood of what's to come next."
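Shenefiel's description matches the underlying mechanics. As a rough, hypothetical illustration (this sketch uses the small open-source GPT-2 model via the Hugging Face transformers library, not any tool named in this article), the code below asks a model to rank the most probable next tokens for a prompt; nothing in the process checks whether the top-ranked continuation is factually true:

```python
# Minimal sketch of next-token prediction, using the open-source GPT-2 model.
# The model scores likely continuations; it never validates factual accuracy.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The court held that the defendant"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only

# Show the five most probable next tokens: likelihood, not truth.
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  p={float(p):.3f}")
```

A fabricated case citation can score just as highly as a real one, which is exactly the failure mode described below.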

Retired D.C. Superior Court Judge Herbert B. Dixon recently detailed his own experiences playing with ChatGPT and discovering that it listed inaccurate citations. Dixon tried to determine whether one was completely invented or only misattributed, before finally giving up: "I spent more time trying to track down the source of that quote than writing this article," he wrote.

Dixon concluded, "Users must exercise the same caution with chatbot responses as when doing Internet research, seeking recommendations on social media, or reading a breaking news post from some unfamiliar person or news outlet. Don't trust; verify before you pass along the output."

Some courts have already implemented rules around use of generative AI.

One Texas judge issued a directive requiring attorneys to attest either that they'd validated any AI-generated content through traditional methods or that they'd avoided using such tools entirely.

"These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them," Judge Brantley Starr wrote. "These platforms in their current states are prone to hallucinations and bias. ... While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath."

Scott Schlegel, a Louisiana District Court judge, said he understands why some judges would want policies mandating disclosure of generative AI use, but he personally sees this as unnecessary. He noted that courts already require attorneys to swear to the accuracy of the information they provide, under Rule 11 of the Federal Rules of Civil Procedure or similar rules.

Lawyers also need to be careful about entering sensitive client information into generative AI tools, because the tools may not be designed to keep those details private, Schlegel said.

Still, Schlegel believes ChatGPT can help seasoned attorneys in particular, because they've developed a sharp eye for reviewing documents for errors. For them, he said, generative AI is essentially "a much more sophisticated cut-and-paste."

But new lawyers may suffer from using it, Schlegel said. They haven't yet developed the experience to catch potential issues, and relying on the tool could get in the way of their ever learning the nuances of the law.

General-purpose generative AI pulls information from Twitter, Reddit and other sources that may not lend themselves to accurate legal answers. Specialized generative AI trained on legal texts, however, could do better, Shenefiel said, speaking generally rather than about any specific product.

With this in mind, some companies are striving to create AI tools expressly for the legal sector.

These include LexisNexis' Lexis+ AI, AI startup Harvey's eponymous tool and Casetext's CoCounsel, all of which debuted this year. The tools are designed to summarize legal documents and search for legal information, and they are trained to draw on databases of legal information.

Harvey, for example, is based on GPT-4 but is limited to drawing from a specified data set rather than the open Internet, per Politico. Such measures aim to reduce mistakes, but the need for caution remains.
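Politico's description suggests a retrieval-style design, in which the model only sees passages drawn from a vetted corpus rather than the open web. Harvey's internals are not public, so the following is a generic, hypothetical sketch of that pattern (using scikit-learn for the retrieval step and an invented two-document corpus), not the product's actual code:

```python
# Hypothetical sketch of confining a model to a fixed corpus: retrieve the
# most relevant vetted passages, then instruct the model to use only those.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [  # stand-in for a curated legal database
    "Rule 11 requires attorneys to certify filings after reasonable inquiry.",
    "Summary judgment is proper when there is no genuine dispute of material fact.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

def build_prompt(question: str, k: int = 1) -> str:
    """Retrieve the top-k passages and confine the model's answer to them."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
    context = "\n".join(corpus[i] for i in scores.argsort()[::-1][:k])
    return (
        "Answer using ONLY the sources below. If they do not contain the "
        f"answer, say so.\n\nSources:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("What does Rule 11 require of attorneys?"))
```

Grounding the prompt this way narrows what the model can invent, but as the comments below make clear, it does not eliminate hallucination.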

David Wakeling was leading law firm Allen & Overy's rollout of Harvey when he spoke to Politico. He said A&O operates on the assumption that "it [Harvey] hallucinates and has errors," and he compared the tool to "a very confident, extremely articulate 12-year-old who doesn't know what it doesn't know."

Generative AI could also affect courtroom evidence. The technology can currently create images and audio that are difficult to distinguish from the real thing, and the same will likely become true for video in the future, Shenefiel said.

This falsified media could then be presented as evidence, with courts struggling to detect the deception.

"I can imagine an allegation of threatening phone calls with a cloned voice," Schlegel said. "I can imagine a personal injury case where somebody deepfakes a video."

Texas' "Generative AI: Overview for the Courts" also raises the concern that the tools could be used to create false but convincing judicial opinions, orders or decrees.

Shenefiel said people should be required to disclose if they've used generative AI in items submitted as evidence, but he noted there are currently very few ways to detect whether evidence was altered or fully created with such tools.

One potential mitigation could be to attach digital signatures or watermarks to content created by AI. Recently, seven AI companies pledged to develop mechanisms for indicating when audio or visuals were created by AI, per a White House announcement.

Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI made these voluntary commitments, and it remains to be seen whether they will follow through. Digital watermarking would also need to be ubiquitous to be fully effective.
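The pledge leaves the mechanism open, but one long-established provenance technique is a detached digital signature over a file's bytes, which lets anyone holding the publisher's public key verify that a file is unaltered and came from that publisher. The sketch below is a hypothetical illustration using the Python cryptography library, not any company's announced scheme:

```python
# Hypothetical provenance sketch: sign generated media, verify it later.
# (Illustrative only; the companies' actual watermarking plans are not public.)
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()  # held privately by the AI provider
verify_key = signing_key.public_key()       # published so anyone can verify

media_bytes = b"...bytes of a generated image or audio clip..."
signature = signing_key.sign(media_bytes)   # distributed alongside the file

# A court, journalist or platform can later confirm the file is unmodified:
try:
    verify_key.verify(signature, media_bytes)
    print("Valid: bytes match what the provider signed.")
except InvalidSignature:
    print("Invalid: file was altered or did not come from this provider.")
```

A scheme like this can only vouch for content from providers that sign their output; unsigned AI media would slip through, which is why broad adoption matters.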

This is the second of a two-part series.
