AI can be a force for good or ill in society, so everyone must shape it, not just the tech guys

Living with AI

Although designers do have a lot of power, AI is just a tool conceived to benefit us. Communities must make sure that happens

Fri 11 Aug 2023 03.00 EDT

Superpower. Catastrophic. Revolutionary. Irresponsible. Efficiency-creating. Dangerous. These terms have all been used to describe artificial intelligence over the past several months. The release of ChatGPT to the general public thrust AI into the limelight, and many are left wondering: how is it different from other technologies, and what will happen when the way we do business and live our lives changes entirely?

First, it is important to recognise that AI is just that: a technology. As Amy Sample Ward and I point out in our book, The Tech That Comes Next, technology is a tool created by humans, and therefore subject to human beliefs and constraints. AI has often been depicted as a completely self-sufficient, self-teaching technology; in reality, it is subject to the rules built into its design. For instance, when I ask ChatGPT, "What country has the best jollof rice?", it responds: "As an AI language model, I don't have personal opinions, but I can provide information. Ultimately, the question of which country has the best jollof rice is subjective and depends on personal preference. Different people may have different opinions based on their cultural background, taste preferences, or experiences."

This reflects an explicit design choice by the AI programmers to prevent this AI program from providing specific answers to matters of cultural opinion. Users of ChatGPT may ask the model questions of opinion about topics more controversial than a rice dish, but because of this design choice, they will receive a similar response. Over recent months, ChatGPT's developers have modified its code in response to accusations and examples of sexism and racism in the product's responses. We should hold developers to a high standard and expect checks and balances in AI tools; we should also demand that the process of setting these boundaries is inclusive and involves some degree of transparency.

While designers have a great deal of power in determining how AI tools work, industry leaders, government agencies and nonprofit organisations exercise their own power in choosing when and how to apply AI systems. Generative AI may impress us with its ability to produce headshots, plan vacation itineraries, create work presentations, and even write new code, but that does not mean it can solve every problem. Despite the technological hype, those deciding how to use AI should first ask the affected community members: "What are your needs?" and "What are your dreams?" The answers to those questions should set the constraints developers implement, and should determine whether and how AI is used at all.

In early 2023, Koko, a mental health app, tested GPT-3 to counsel 4,000 people but shut the test down because the responses "felt kind of sterile". It quickly became apparent that the affected community did not want an AI program instead of a trained human therapist. Although the conversation about AI may be pervasive, its use is not and does not have to be. The consequences of rushing to rely solely on AI systems to provide access to medical services, to prioritise people for housing, or to recruit and hire for companies can be tragic; such systems can exclude and cause harm at scale. Those considering how to use AI must recognise that the decision not to use it is just as powerful as the decision to use it.

Underlying all these issues are fundamental questions about the quality of the datasets powering AI and about access to the technology. At its core, AI works by performing mathematical operations on existing data to make predictions or generate new content. If that data is biased, unrepresentative, or missing particular languages, then the chatbot responses, the activity recommendations and the images generated from our prompts may carry the same biases.

To counter this, the work of researchers and advocates at the intersection of technology, society, race and gender should inform our approaches to building responsible technology tools. Safiya Noble has examined the biased search results that appeared when "professional hairstyles" and "unprofessional hairstyles for work" were searched in Google: the former term yielded images of white women; the latter, images of Black women with natural hairstyles. Increased awareness and advocacy based on this research eventually pushed Google to update its system.

There has also been work to influence AI systems before they are deemed complete and deployed into the world. A team of Carnegie Mellon University and University of Pittsburgh researchers used "AI lifecycle comic boarding", translating AI reports and tools into easy-to-understand descriptions and images, to engage frontline workers and unhoused individuals in discussions about an AI-based decision support system for homeless services in their area. Participants were able to understand how the system worked and to give concrete feedback to the developers. The lesson is that AI is used by humans, and therefore an approach that combines the technology with societal context is needed to shape it.

Where do we as a society go from here? Whose role is it to balance the design of AI tools, the decisions about when to use AI systems, and the need to mitigate the harms that AI can inflict? Everyone has a role to play. As discussed above, technologists and organisational leaders have clear responsibilities in the design and deployment of AI systems. Policymakers can set guidelines for the development and use of AI, not to restrict innovation but to direct it in ways that minimise harm to individuals. Funders and investors can support AI systems that centre humans and encourage timelines that allow for community input and community analysis. All these roles must work together to create more equitable AI systems.

The cross-sector, interdisciplinary approach can yield better outcomes, and there are many promising examples today. Farmer.chat uses Gooey.AI to enable farmers in India, Ethiopia and Kenya to access agricultural knowledge in local languages on WhatsApp. The African Center for Economic Transformation is in the process of developing a multi-country, multi-year programme to undertake regulatory sandbox, or trial, exercises on AI in economic policymaking. Researchers are investigating how to use AI to revitalise Indigenous languages. One such project is working with the Cheyenne language in the western United States.

These examples demonstrate how AI can be used to benefit society in equitable ways. History has proved that inequitable effects of technology compound over time; these disparate effects are not something for the tech guys to fix on their own. Instead, we can collectively improve the quality of AI systems developed and used on and about our lives.
