AI for more caring institutions – Harvard School of Engineering and Applied Sciences

More and more public services, such as affordable housing, public school matching and child welfare, are relying on algorithms to make decisions and allocate resources. So far, much of the work that has gone into designing these systems has focused on workers' experiences using them or communities' perceptions of them.

But what about the actual impact these programs have on people, especially when the decisions the systems make lead to denial of services? Can you design algorithms to help people make sense of and contest decisions that significantly impact them?

Naveena Karusala, a postdoctoral fellow at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), together with Krzysztof Gajos, the Gordon McKay Professor of Computer Science at SEAS, and a team of researchers, is rethinking how to design algorithms for public services.

"Instead of only centering the worker or institution that is using the tool to make a decision, can we center the person who is affected by that decision in order to work towards more caring institutions and processes?" asked Karusala.

In a paper being presented this week at the Association for Computing Machinery's conference on Human Factors in Computing Systems (CHI), Karusala and her colleagues offer recommendations to improve the design of algorithmic decision-making tools, making it easier for people affected by those decisions to navigate every step of the process, especially when they are denied.

The researchers aimed to learn from areas where algorithms currently aren't being used but could be deployed in the future. They looked specifically at public services for land ownership in rural South India and affordable housing in the urban Northeast United States, as well as the contestation processes that follow when applicants are denied services.

Governments in the U.S., India and around the world recognize the right to contest a denial of public services, increasingly so when the denial comes from an algorithm. But contestation processes can be complex, time-consuming and difficult to navigate, especially for people in marginalized communities.

Intermediaries like social workers, lawyers and NGOs play an important role in helping people navigate these processes and understand their rights and options. In public health, this concept is known as accompaniment: community-based aid workers accompany people in under-resourced communities as they navigate complex healthcare systems.

"One of the takeaways of our research is the clear importance of intermediaries and embedding the idea of accompaniment into the algorithm design," said Karusala. "Not only should these intermediaries be involved in the design process, but they should also be made aware of how the decision-making process works, because they're the ones that bridge communities and public services."

The researchers suggest that algorithmic decision-making systems should be designed to proactively connect applicants to those intermediaries.

"Today, many AI researchers are focused on improving an algorithm's ability to explain its decisions, but that isn't useful enough to the people who have been denied service," said Karusala.

"Our findings point to the fact that rather than focusing only on explanations, there should be a focus on other aspects of algorithm design that can prevent denials in the first place," she said.

For example, if a background check turns up information that puts a person on the boundary between approval and disapproval for housing, algorithms need to be able to ask for additional information to either make a decision or ask a human reviewer to step in.
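As a rough illustration of that recommendation, here is a minimal sketch of such a "boundary band" policy in Python. It is a hypothetical example, not the paper's implementation: the thresholds, field names and scoring model are all assumed for illustration.

```python
# Hypothetical sketch of a boundary-band deferral policy: rather than
# auto-denying an applicant whose screening score falls near the cutoff,
# the system requests more information or escalates to a human reviewer.
# All thresholds and field names below are illustrative assumptions,
# not taken from the paper.

from dataclasses import dataclass, field
from enum import Enum


class Outcome(Enum):
    APPROVE = "approve"
    REQUEST_MORE_INFO = "request_more_info"
    HUMAN_REVIEW = "human_review"


@dataclass
class Application:
    score: float                                    # e.g., screening-model output in [0, 1]
    missing_fields: list[str] = field(default_factory=list)


APPROVE_THRESHOLD = 0.75   # clearly above the bar: approve automatically
BOUNDARY_THRESHOLD = 0.50  # below this, still defer to a person; never auto-deny


def decide(app: Application) -> Outcome:
    if app.score >= APPROVE_THRESHOLD:
        return Outcome.APPROVE
    # Boundary case: if the application is incomplete, ask for more
    # information, since the likely denial may rest on missing context.
    if app.score >= BOUNDARY_THRESHOLD and app.missing_fields:
        return Outcome.REQUEST_MORE_INFO
    # Everything else goes to a human reviewer rather than resolving
    # to an automatic denial.
    return Outcome.HUMAN_REVIEW


if __name__ == "__main__":
    print(decide(Application(score=0.62, missing_fields=["income_history"])))
    # -> Outcome.REQUEST_MORE_INFO
```

The design choice in this sketch is that the ambiguous middle band never resolves to an automatic denial: it either gathers missing context from the applicant or hands the case to a person.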

"These are some concrete ways that the burden often placed on marginalized communities could be shared not only with intermediaries, but also with public service administrators and the algorithmic tools themselves," said Karusala.

"This research is particularly significant because it challenges a deeply held assumption in the computing community: that the most effective way to provide people with grievance-redressal mechanisms is for algorithms to provide explanations of their decisions," said Gajos. "Instead, this research suggests that algorithms could be used throughout the process: from identifying individuals who may not apply on their own and may need to be encouraged to do so, to helping applicants prepare and contextualize information to make applications relevant and informative, to navigating contestation strategies."

The research was co-authored by Sohini Upadhyay, Rajesh Veeraraghavan and Gajos.
