Artificial Intelligence: Recent Congressional Activity and a Look to …

In the past few months, the American public has become increasingly fixated on artificial intelligence (AI), especially generative artificial intelligence (GenAI), because of the economic and social considerations associated with this developing technology.

AI has inspired contemplation of its potential benefits in the fight against cancer, has become one of the issues in the Hollywood writers and actors strikes, has led a group of tech executives to warn that humans could face extinction from AI, and has led many people to crack jokes, perhaps a bit nervously, about the robots taking over.

While many people in business, medicine, and the arts (to name a few) are contemplating how to harness its capabilities, there is increasing interest among Members of Congress in determining whether and how the federal government can and should regulate AI, especially GenAI. One House Member told us that in the past couple of months, interest in AI at the Member level has gone from zero to 60. Reflecting the concerns that some policymakers share about GenAI, at one recent Senate hearing, Subcommittee Chairman Sen. Richard Blumenthal (D-CT) made the case for regulation by opening with his own deepfake: an AI-generated voice, cloned from his speeches, delivered an opening statement that ChatGPT had drafted when asked what he would say at the start of a hearing on AI. Meanwhile, the House of Representatives has laid out guidelines limiting use of ChatGPT by Members and their staffs to research and evaluation only, at least for now.

Given the widespread policy implications of AI, we can expect continued Congressional activity in this area. This alert provides an overview of what the current Congress is doing to educate itself and legislate on topics associated with AI.

For the purposes of this alert, we are using the term GenAI to mean the kind of AI that can create new content, like text, images, and video, by learning from pre-existing and publicly available data sources. As our colleagues noted in a June 7 alert on GenAI and legal considerations for the trade association and nonprofit industry, popular examples of GenAI include OpenAI's ChatGPT, GitHub Copilot, DALL-E, HarmonAI, and Runway, which can generate text, computer code, images, songs, and videos, respectively, with limited human involvement.

The environment for Congressional action on AI is hazy at the moment. While there is great interest in the issue, many of the major players in Congress are focused on very different problems that AI and GenAI will affect in the coming years. Because the universe of issues is so vast, each Member of Congress seems to have his or her own pet priority in this area. For example, on July 13, 50 Democratic Members wrote to the Federal Election Commission (FEC) to express concern about the impact of AI-generated campaign advertisements, particularly those that are fraudulent in nature, and requested that the FEC begin setting up a framework to regulate AI political ads.

AI has inspired contemplation of its potential benefits in the fight against cancer, has become one of the factors at issue in the Hollywood writers and actors strikes, has led a group of tech executives to warn that humans could face extinction from AI, and has led many people to crack jokes, perhaps a bit nervously, about the robots taking over. Companies, trade associations, and nonprofits with a stake in the AI debate and with particular insight to share should be active at this time, focusing on the Members who are most active and on the multiple committees of jurisdiction.

Dan Renberg, Government Relations Practice Co-Leader

The national security implications of AI have caught the attention of many in Congress. For example, on April 19, under the leadership of Chairman Joe Manchin (D-WV), the Senate Armed Services Subcommittee on Cybersecurity held a hearing to receive testimony from outside experts and industry leaders on the state of AI and machine learning applications to improve Department of Defense operations. Expert witnesses on defense AI highlighted the technical challenges of identifying key technologies and integrating them into defense systems while ensuring that the applications deployed are secure and trusted.

With enormous stakes for the United States, there is a universal appetite in Washington for regulation of AI but no consensus about AI policy or the regulatory regime to sustain it. The proposals circulating in Congress are merely the starter's gun for a debate challenging policymakers and regulators to develop expertise and adapt to rapid tech developments. Key formative decisions about regulatory design are looming that will permanently impact America's AI position globally.

Congressman Phil English, Senior Government Relations Advisor

Others are focused on the impact on consumers and disenfranchised populations. Sen. Jon Ossoff (D-GA) has focused his efforts on protecting human rights and ensuring that people's civil rights are not violated as AI scrapes the web (read our recent Privacy Counsel blog post on the increasing number of lawsuits involving data scraping and GenAI tools). Sen. Chris Coons (D-DE) is focused on the impact of AI on patents, trademarks, and the creative economy. At a June 7 hearing, his Senate Subcommittee on Intellectual Property considered questions such as whether, and how, to compensate artists if GenAI creates a song that sounds like Taylor Swift's music but is not a sample or carbon copy. At a recent hearing on AI in the same Subcommittee, Sen. Thom Tillis (R-NC) stated that the creative community is experiencing immediate and acute challenges due to the impact of generative AI. Others, like Sens. Dick Durbin (D-IL) and Lindsey Graham (R-SC), have focused on the need to protect children from adults who create AI-generated child sexual abuse material by instructing platforms to produce imagery that combines real faces with AI-generated bodies.

Congressman Jay Obernolte (R-CA) has begun to attract attention as a leading expert on AI because of his professional and educational background, which includes an advanced degree in computer science and an earlier career as a computer programmer. In addition to being Vice Chair of the Congressional Artificial Intelligence Caucus, Rep. Obernolte recently authored an op-ed column in The Hill in which he provided an overview of multiple policy implications of GenAI, called for industry and government guardrails to prevent misuse of this promising technology, and noted the need to align our nation's education system with the changes that AI will bring over time.

China's advancement in AI research and technologies has also been a major focal point of discussion in Congress, especially during AI-related hearings. At a June 22 hearing of the House Science, Space, and Technology Committee, Chairman Frank Lucas (R-OK) stated: "While the United States currently is the global leader in AI research, development, and technology, our adversaries are catching up. The Chinese Communist Party is implementing AI industrial policy at a national scale, investing billions through state-financed investment funds, designating national AI champions, and providing preferential tax treatment to grow AI startups. We cannot and should not try to copy China's playbook. But we can maintain our leadership role in AI, and we can ensure it's developed with our values of trustworthiness, fairness, and transparency. To do so, Congress needs to make strategic investments, build our workforce, and establish proper safeguards without overregulation."

We rely on AI every day. It is navigation for our cars, Siri on our iPhones, robotic vacuum cleaners, and so much more. But the advance of AI to develop machines that think, reason, and possess intelligence requires us to understand how we prevent building machines with capabilities that would threaten human life. Congress and the Administration are beginning to recognize that there are many policy questions that relate to AI, including Generative Artificial Intelligence (GenAI) and Artificial Super Intelligence (ASI). Time is short for us to decide how to regulate AI.

Senator Byron Dorgan, Senior Policy Advisor

There are also big philosophical questions about how and where the government should insert itself in the process of regulating and fostering AI development. Europe has created an AI sandbox, where developers can test out their AI products in a safe environment that allows academics to study the harms, impacts, and other implications. In the US, observers have thus far landed in two camps: (1) advocates for creating a new federal agency to regulate AI; and (2) those who prefer to let the private sector innovate and keep doing what scaled the technology to this point. These viewpoints cross party lines and political ideologies at various intersections.

Some free-market Republicans have said that the government can use Section 230 of the Communications Decency Act, which has traditionally been used to manage online speech and moderate social media content, to regulate AI. These small-government Republicans also think that there is no need to create a new agency because Section 230 should suffice. On the left, some policymakers are pushing for a new federal agency to collect data on AI and study this issue in detail. One example is the bill introduced in May by Sens. Michael Bennet (D-CO) and Peter Welch (D-VT), which would establish a Federal Digital Platform Commission that would, among other things, regulate GenAI. This is also the stance of the Biden Administration, which has requested $2.6 billion from Congress for the National Artificial Intelligence Research Resource (NAIRR) Task Force. The Biden Administration also released its Blueprint for an AI Bill of Rights last fall, which landed with a thud in Washington among the major players.

At the moment, given the novelty of GenAI and the lack of deep technological understanding among some Members of Congress, there is some confusion about the nature of GenAI and the diverse issues it can create. In a positive development on the Senate side, Majority Leader Chuck Schumer (D-NY), Sen. Todd Young (R-IN), and others are holding three bipartisan briefings for the entire Senate, featuring academics, major industry players, and government officials, to help bring everyone up to speed. Leader Schumer also laid out a framework on June 21 that explained what he intends for the Senate to focus on regarding AI in the coming months. This follows on the heels of an educational session on AI that Speaker Kevin McCarthy (R-CA) held for Members of the House of Representatives earlier this year and private briefings that other groups of House Members have planned for themselves.

It is worth noting that the European Union has been actively working on a regulatory framework for AI, with the European Parliament approving a massive EU AI Act in mid-June that aims to protect the general public from abuses that could arise through the use of AI. Reactions from US policymakers were mixed, with Sen. Michael Bennet (D-CO) commenting, "The United States should be the standard-setter. We need to lead that debate globally, and I think we're behind where the EU is," while Sen. Mike Rounds (R-SD) indicated that he was not as concerned about falling behind the EU on the regulatory front and was more concerned about continuing to facilitate US dominance in developing new innovations like GenAI.

The nature of AI is such that it will take time for Members of Congress to gain a comfort level with its true potential and what, if any, guardrails are needed. As they increase their familiarity and consult with industry and other stakeholders, it is possible that a consensus will emerge and some initial regulatory steps will be taken beyond merely introducing bills or holding hearings. As AI dominates public discourse, we can expect a ramping-up of legislative activity. Constituents expressing views, positive or negative, about GenAI when Members are home in their states and districts could also affect the timeline.

The legal and policy framework for regulating AI is going to be a front-burner issue for Congress and the Administration for some time to come. It is incumbent upon stakeholders with an interest in this issue to develop policy principles and recommendations and to convey them to the Hill and relevant agencies.

Senator Doug Jones, Counsel

According to a study by OpenSecrets, which tracks money in politics, 123 companies, universities, and trade associations spent a collective $94 million lobbying the federal government on issues involving AI in the first quarter of 2023. Accordingly, companies, trade associations, and nonprofits with a stake in the AI debate and with particular insight to share should be active at this time, focusing on the Members who are most engaged with the issues and on the multiple committees of jurisdiction.
