At tech companies, let's share the reins

Large Language Models and other AI systems will only reach their full potential when a broader set of people build them

Kathy Pham is a computer scientist; Senior Advisor at Mozilla; Vice President of AI and Machine Learning at Workday; and Adjunct Lecturer at Harvard. Opinions here are her own and not those of any affiliated organization.

In 2023, we learned a lot about AI models: What's possible, like discovering a new class of antibiotics. What's uncertain, like how exactly businesses will integrate chatbots into their operations. And how the technology might be regulated and implemented across governments, per the Blueprint for an AI Bill of Rights, the NIST AI Risk Management Framework, the EU AI Act, and the United States' executive order on AI.

But for those within the industry, or those paying close attention, there was another big learning: The people building the foundation of the most influential technology of our era are overwhelmingly from a very narrow set of disciplines, namely computer science, software engineering and, even more specifically, machine learning.

It makes sense for the people writing the code to be computer scientists and programmers; that's our trade. But building these models is about a lot more than code. It's about identifying use cases, making design decisions, and anticipating and mitigating potential harms. In short: It's about making sure the technology isn't just functional, but also trustworthy. And this requires skill sets beyond machine learning and traditional computer science.

I know this well from my own experiences across academia, non-profits, government, and industry. The hallmark of truly impressive technology is productive applications, and companies only build the best and right things when they pull together experts across disciplines.

Five years ago, I published an essay about the consequences of this bias in the industry and how we might fix it. I argued that if we want to change the tech industry, to make software more fair and less harmful, then we need to change what classes are required for a computer science degree.

AI has grown exponentially since then. Its tremendous computing capacity means even more unpredictable, unintended harms. My original thesis remains true: We need to expand computer science curricula to include the humanities. But those changes take time. Right now, we also need to break down the persistent engineering/non-engineering divide directly at the industry level.

There's no shortage of stories about AI systems' harms: hallucinations that generate incorrect information, disinformation, sexist and racist outputs, toxic training sets. In many cases, these are problems that engineers overlooked, but that others, like cognitive scientists and media theorists, confronted after the fact. The order of operations is clearly backward: These harms should be addressed before the system is deployed. Proactive, not reactive, should be the status quo, and that's only possible with more varied expertise. It's true that some companies have made progress on this front with their trust and safety teams. But those teams are still siloed and splintered.

There are some bright spots these companies can emulate: In recent years, Airbnb has centered the work of Laura Murphy, a legal and civil rights expert, to fight discrimination in product features. In 2018, Salesforce established an Office of Ethical and Humane Use, a place for internal frontline employees, executives, and external experts across a broad range of functions to guide decision-making. (I was an inaugural advisory board member.) When AI really entered the zeitgeist, the office was ready for action.

It isn't a big ask for the engineers to talk with the lawyers or social scientists. A simple conversation ("Hey, if I create Booleans with two choices for gender in my model, is that OK?") can head off bias, and the need for an audit, later on. I'm often reminded of my late father, a truck driver. There are countless engineers working to streamline the trucking industry. But how many of them spend time speaking with actual truckers?
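To make that Boolean question concrete, here is a minimal sketch in Python. The record types and field names are hypothetical, not drawn from any real system; the point is only to show how a two-value field bakes an assumption into the data before any model sees it, while a more open representation leaves the decision to the people the data describes.

```python
from dataclasses import dataclass
from typing import Optional

# A schema that stores gender as a Boolean forces every person into one of
# two values at the data layer, before any model or product decision is made.
@dataclass
class UserRecordNarrow:
    user_id: str
    is_female: bool  # the "two choices" assumption, hard-coded into the schema

# A more open representation: self-described, optional, and not limited to
# two values. The schema itself no longer makes the call.
@dataclass
class UserRecordOpen:
    user_id: str
    gender: Optional[str] = None  # free text or an extensible set of values
```

A five-minute conversation with a lawyer or social scientist is often what surfaces the difference between these two designs before the data is collected, rather than after an audit flags it.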

In addition to overt harms like bias, there are also harms of omission. A dearth of humanities experts means a missed opportunity for AI systems with clear use cases and problem-solving capabilities. Right now, many AI systems are developed and deployed first, with a goal of finding a purpose later. Consider OpenAI's 128K context window or Anthropic's 200K context window: technically impressive, but lacking clear objectives until they are used for meaningful applications for society. Research and development without a clear goal isn't necessarily a bad thing, but these systems use tremendous amounts of money and energy to train. And we also know real goals exist, like better cancer detection algorithms for people with darker skin. There are powerful examples of what's possible with this approach, like the Kiazi Bora app in Tanzania. Its creators identified a problem first, a lack of agricultural literacy among women in East Africa, and then built an AI chatbot to help solve it.

For AI systems to be more trustworthy, tech companies must prioritize a broad range of expertise. Yes, there has been encouraging progress on this front: Initiatives like All Tech is Human are connecting engineering to the social sciences and humanities, programs like Mozilla's Responsible Computing Challenge are reimagining computer science curricula to encompass humanistic studies, and teams like Workday's Responsible AI are convening boards that span disciplines.

What's next? Now we need industry change to match the accelerating speed of AI development and deployment.
