Working at the intersection of data science and public policy | Penn Today

One of the ideas you discuss in the book is algorithmic fairness. Could you explain this concept and its importance in the context of public policy analytics?

Structural inequality and racism are at the foundation of American governance and planning. Race and class dictate who gets access to resources; they shape where people live, where their children go to school, their access to health care, their prospects for upward mobility, and beyond.

If resource allocation has historically been driven by inequality, why should we assume that a fancy new algorithm will be any different? This theme runs throughout the book. Those reading for context get several in-depth anecdotes about how inequality is baked into government data. Those reading to learn the code get new methods for opening the algorithmic black box and testing whether a solution further exacerbates disparate impact across race and class.
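To give a flavor of what such a test can look like in practice, here is a minimal sketch in R (the language the book teaches) using made-up data; the variable names and the single false-positive-rate comparison are illustrative assumptions, not code from the book:

    # Compare an algorithm's false-positive rate across two race groups
    # on simulated data; a large gap signals possible disparate impact.
    set.seed(1)
    n <- 1000
    scores <- data.frame(
      race      = sample(c("groupA", "groupB"), n, replace = TRUE),
      outcome   = rbinom(n, 1, 0.3),   # observed outcome (1 = event occurred)
      predicted = rbinom(n, 1, 0.3)    # algorithm's binary prediction
    )

    # False-positive rate by group: flagged by the model, but no observed event
    fpr_by_group <- aggregate(
      predicted ~ race,
      data = subset(scores, outcome == 0),
      FUN  = mean
    )
    print(fpr_by_group)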

In the end, I develop a framework called algorithmic governance that helps policymakers and community stakeholders understand how to trade off algorithmic utility against fairness.

From your perspective, what are the biggest challenges in integrating tools from data science with traditional planning practices?

Planning students learn a lot about policy but very little about program design and service delivery. Once a legislature passes a $50 million line item to further a policy, it is up to a government agency to develop a program that can intervene with the affected population, allocating that $50 million in $500, $1,000 or $5,000 increments.

As I show in the book, data science combined with governments' vast administrative data is good at identifying at-risk populations. But doing so is meaningless unless a well-designed program is in place to deliver services. Thus, the biggest challenge is not teaching planners how to write data science code but how to consider algorithms more broadly in the context of service delivery. The book provides a framework for this by comparing an algorithmic approach to service delivery with the business-as-usual approach.
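To make the identify-then-deliver point concrete, here is a minimal sketch in R on simulated administrative data; the variables, the logistic regression, and the 50-intervention budget are hypothetical assumptions for illustration, not the book's code:

    # Score households by predicted risk, then target a fixed program
    # budget at the highest-risk cases.
    set.seed(2)
    n <- 500
    admin <- data.frame(
      prior_contacts = rpois(n, 2),                 # hypothetical admin variables
      income_k       = round(rnorm(n, 45, 15), 1)
    )
    admin$at_risk <- rbinom(n, 1, plogis(-1 + 0.5 * admin$prior_contacts -
                                           0.02 * admin$income_k))

    fit <- glm(at_risk ~ prior_contacts + income_k, data = admin, family = binomial)
    admin$risk_score <- predict(fit, type = "response")

    # With a budget of 50 interventions, serve the 50 highest-scoring households
    targeted <- head(admin[order(-admin$risk_score), ], 50)
    nrow(targeted)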

Has COVID-19 changed the way that governments think about data science? If so, how?

Absolutely. Speaking of service delivery, data science can help governments allocate limited resources. The COVID-19 pandemic is marked entirely by limited resources: from testing, PPE, and vaccines to toilet paper, home exercise equipment, and blow-up pools (the latter was a serious issue for my 7-year-old this past summer).

Government failed at planning for the allocation of testing, PPE, and vaccines. We learned that it is not enough for government to invest in a vaccine; it must also plan for how to allocate vaccines equitably to the populations at greatest risk. This is exactly what we teach in Penn's MUSA Program, and I was disappointed at how governments at all levels failed to ensure that the limited supply of vaccine aligned with demand.

We see this supply/demand mismatch show up time and again in government, from disaster response to the provision of health and human services. I truly believe that data can unlock new value here, but, again, if government is uninterested in thinking critically about service delivery and logistics, then the data is merely a sideshow.

What do you hope people gain by reading this book?

There is no equivalent book currently on the market. If you are an aspiring social data scientist, this book will teach you how to code spatial analysis, data visualization, and machine learning in R, a statistical programming language. It will help you build solutions to address some of today's most complex problems.

If you are a policymaker looking to adopt data and algorithms into government, this book provides a framework for developing powerful algorithmic planning tools, while also ensuring that they will not disenfranchise certain protected classes and neighborhoods.
