Mehran Sahami on AI and safeguarding society

Image credit: Claire Scully

As engineers and computer scientists make rapid advances in machine learning and artificial intelligence, they are being compared to the physicists of the mid-20th century. It's a parallel Stanford computer scientist Mehran Sahami makes explicit in his introduction to students taking his CS 182: Ethics, Public Policy, and Technological Change course, when he shows them a photo of the billowing mushroom cloud from the nuclear bomb dropped on Nagasaki, Japan, in 1945.

"In the 20th century, unleashing the power of the atom was a physical power, but today we have an informational power, and it's just as powerful, if not more so, because information is what impacts people's decision-making processes," said Sahami, the Tencent Chair of the Computer Science Department and the James and Ellenor Chesebrough Professor in the School of Engineering. "There's a tremendous amount of responsibility there."

For Sahami, it is crucial in 2024 that society, business leaders, and policymakers safeguard the future from the unintended consequences of AI.

When OpenAI launched ChatGPT to the public on Nov. 30, 2022, it prompted sensationalism and controversy. Anyone can now ask the large language model to perform any number of text-based tasks and receive a personalized response in seconds.

Sahami described ChatGPT as an "awakening." "This was one of the first big applications where AI was put in people's hands, and they were given an opportunity to see what it can do," he said. "People were blown away by what the technology was capable of."

Sahami thinks that one of the exciting areas where generative AI could be applied is personalized services such as tutoring, coaching, and even therapy, an industry that is stretched thin.

But AI is expensive to build, and services like these can come with hefty fees, Sahami pointed out.

Of concern is whether these services will be accessible to vulnerable and hard-to-reach populations, the groups that stand to benefit from them the most.

"One of the places I really worry a lot about is who is getting the gains of AI," Sahami said. "Are those gains being concentrated in people who were already advantaged before, or can it actually level the playing field? To level the playing field requires conscious choices to allocate resources to allow that to happen. By no means will it just happen naturally by itself."

In the coming year, Sahami also expects to see AI impact the workforce, whether through labor displacement or augmentation.

Sahami points out that the labor market shift will be the result of choices made by people, not technology. "AI by itself is not going to cause anything," he said. "People make the decisions as to what's going to happen."

"As AI evolves, what sorts of things do we put in place so we don't get big shocks to the system?"

Some measures could include retraining programs or educational opportunities to show people how to use these tools in their lives and careers.

"I think one of the things that will be front and center this coming year is how we think about guardrails on this technology in lots of different dimensions," Sahami said.

In 2023, President Biden issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence that urged the government, private sector, academia, and civil society to consider some of those safeguards.

"The White House's executive order has shined a spotlight on the fact that the government wants to act and should act," Sahami said.

While Sahami is heartened by the order, he also has concerns.

"The real question is what will happen with that in the coming year and how much will be followed up by agencies," he said.

One worry Sahami has is whether people in both the government and the private sector have the right skill set to ensure the order is carried out effectively.

"Some of these issues have a lot of subtleties, and you want to make sure the right expertise is in the room," Sahami said. "You need people with deep technical expertise to make sure that policy is actually well guided," he added, pointing out there is a risk that one can come up with a policy that seems well intentioned "but the details actually don't mesh with how the technology works."

Over the past few months, OpenAI made newspaper headlines again; this time, reports focused on the company's cofounder and Sahami's former student, Sam Altman. Over a chaotic few days, Altman was ousted from the company but quickly brought back, along with a restructured board.

"What happened to OpenAI has generated a spotlight on thinking about the fragility of some of the governance structures," Sahami said.

OpenAI's unique business model was debated across the media. The company started as a mission-driven nonprofit but later established a for-profit subsidiary to expand its work when it felt the public sector could no longer support its goals. It was reported that disagreements had emerged between Altman and the board about the company's direction.

"I don't think this is going to be the first or last time we're going to see these tensions between what we want and what is realistic," Sahami said. "I think those kinds of things will continue, and those kinds of debates are healthy."

A topic of ongoing debate is whether AI should be open access, and it's an issue the National Telecommunications and Information Administration will examine in 2024 as part of President Biden's executive order on AI.

Open access also came up when Altman was a guest speaker in Ethics, Technology + Public Policy for Practitioners, the class Sahami co-taught this fall with the philosopher Rob Reich and the social scientist and policy expert Jeremy Weinstein.

Sahami asked Altman, who spoke a week before the shake-up at OpenAI, about some of the market pressures he experienced as CEO, as well as the pros and cons of making these models open source, a direction Altman advocated for.

A benefit of open source is greater transparency into how a model works. People are also able to use and extend the code, which can lead to new innovations and arguably makes the field more collaborative and competitive.

However, the democratization of AI also poses a number of risks; open models could be used for nefarious purposes. Some fear, for example, that they could aid bioterrorism and therefore need to be kept guarded.

"The question is what works when there are guardrails in place versus the benefits you get from transparency," Sahami said.

But there are also solutions in the middle.

"Models could be made available in a transparent way in escrow for researchers to evaluate," Sahami said. "That way, you get some level of transparency, but you don't necessarily make the whole model available to the general public."

Sahami sees the tools of democracy as a way to come to a collective decision about how to manage the risks and opportunities of AI and technology: "It's the best tool we have [to] take the broader opinion of the public and the different value systems that people have into that decision-making process."

Reich is the McGregor-Girand Professor of Social Ethics of Science and Technology and professor of political science in the School of Humanities and Sciences.

Weinstein is the Kleinheinz Family Professor in International Studies and professor of political science in the School of Humanities and Sciences as well as a senior fellow at the Freeman Spogli Institute for International Studies and at the Stanford Institute for Economic Policy Research.
