The Challenges of Regulating AI and the Role of Behavioral Science

By Heather Graci, Behavioral Scientist

Laws move slowly; technology moves fast. That's one challenge of regulating AI: sometimes the damage is already done by the time lawmakers pass legislation. Take the European ban on using images scraped from the web to build facial recognition databases, a ban approved years after a company had already collected billions of pictures of our unsuspecting, unconsenting faces to do exactly that.

The stakes also vary wildly depending on the context in which AI is deployed. For instance, how do you regulate a dual-use technology that some might use to sort handbags on an e-commerce site and others to direct military drones in a war zone? And regulation requires anticipating and countering risks, but it's difficult to anticipate risk when we can't possibly predict every novel situation an AI might encounter, nor how it will behave when it does. During a grisly incident in San Francisco, a self-driving car struck a pedestrian and proceeded to pull over: a reasonable response to a collision, but not when the victim is still in harm's way.

The versatile, unpredictable, and rapidly evolving nature of AI presents a challenge for regulators tasked with keeping us safe as the technology becomes both more sophisticated and more entrenched in our day-to-day lives.

This is not a problem with just the machine. It's a problem with how the machine interacts with us.

Earlier this month, the Behavioral Science & Policy Association convened a panel at its 2024 annual conference to discuss the role behavioral science can play in regulating AI. Ronnie Chatterji, professor of business and public policy at Duke University, moderated the conversation, which featured perspectives from the worlds of business, academia, and government.

The panel included Paula Goldman, chief ethical and humane use officer at Salesforce; Kristian Hammond, professor of computer science at Northwestern University and chief scientist at Narrative Science; and Elizabeth Kelly, director of the U.S. Artificial Intelligence Safety Institute.

The core question that Goldman, Hammond, and Kelly are grappling with in their respective domains is how to mitigate harm without hampering innovation. All three agree that behavioral science is essential to these efforts. "This is not a problem with just the machine," Hammond said. "It's a problem with how the machine interacts with us."

We've curated a portion of their discussion below. You can watch the full discussion here.

The transcript has been edited for clarity and brevity.

What about safety and AI keeps you up at night?

Paula Goldman, Salesforce: I spend a lot of time thinking about how to apply safety to the world of AI agents. We're moving from generative AI, which generates content, to AI that takes action on our behalf, to these interfaces where you can ask a million different things and a million different actions result. In that world, how do you know in advance, when you ask for an action, what it's going to look like? If one prompt is going to launch a million emails, for example, how do you check the quality of that in advance? And then check the quality of it ex post?

Kristian Hammond, Northwestern: I worry that we're going to end up trying to solve the wrong problems. There are some really flashy AI fears, but the thing I worry about is that if we ignore the genuine reality of how this impacts individuals and groups in society, we'll end up with "Oh, we have regulations around transparency," "We have regulations around explanation," "We have a focus on being responsible," but without actually getting into the concrete places where there are genuine harms. The fact is that there is a rise in depression among young women ages 13 to 23. There's a rise in online addiction. We allow the production of false pornography that's humiliating women across the country, and we're like, "Well, we don't know what to do."

Let's focus on the places where there are real harms, because they are rampant. The thing I genuinely worry about is that we'll focus on, you know, evil drones blowing people up instead of the fact that we are creating a nation of people who are being humiliated, addicted, and pushed into depression, and we're not doing anything about it.

Elizabeth Kelly, U.S. AI Safety Institute: I totally agree with what Kris said, but I push back a little on the "not doing anything about it," because federal agencies are pretty hard at work trying to make sure they're addressing a lot of these harms. And there's honestly more that Congress needs to do, and we were very clear about that.

This is light speed for government, but it's still slower than the technology.

The thing that keeps me up at night is just how quickly this is moving. If the technology is evolving exponentially, we have no idea what 2025 or 2026 will look like. It's hard to say, "These are the harms we should anticipate." And I think it's even harder to say that we as policymakers, we as government, will be able to stay on top of it.

We've seen the global community move pretty quickly for government. [The executive order] came together in a couple of months, as did the G7 Code of Conduct. This is light speed for government, but it's still slower than the technology. And for all the reasons that we've talked about, we've got to stay ahead of it.

What else is on your mind?

Paula Goldman, Salesforce: I spend a lot of time in this AI bubble. When I step out of it and talk to someone, like a friend I haven't talked to in a long time, I hear a lot of fear and, honestly, a lot of mysticism about AI. I think it's incumbent on all of us to break that down and to give people a mental model for how to interact with AI. How do we build that into these systems, accounting not only for the strengths and weaknesses of where AI is right now and where it's going, but also for human strengths and human cognitive biases? That, I think, is where the magic is. That's where we unlock not only avoiding harm with AI but actually using AI for good.

Kristian Hammond, Northwestern: We have to embrace the notion that this is sociotechnical, that this is not a problem with just the machine. It's a problem with how the machine interacts with us. And that means we have to understand and admit who we are and how we're hurt, and realize that you're not going to solve the problem by telling people to act differently. You're going to solve the problem by making sure the machine is built so that when people do what people do, they don't hurt themselves.

Elizabeth Kelly, U.S. AI Safety Institute: Agreed, and that's why my leadership team includes both an anthropologist and an ethicist. For me, the question is: How do we shift away from AI that is easily monetized and that produces a lot of the harms Kris has talked about, to AI that is actually able to tackle a lot of our most pressing societal problems? Drug discovery and development, carbon capture and storage, education. How can we together work to shift the narrative?

Disclosure: BSPA is an organizational partner of Behavioral Scientist. Organizational partners do not play a role in the editorial decisions of the magazine.
