Technology has brought sweeping changes into our lives and enabled many advances across society. Yet too often, breakthroughs in computer science have unintended social consequences that are not easily undone. What if universities trained students to consider social outcomes from the outset? Embedded Ethics initiatives at Stanford and other institutions seek to do just that, by integrating principles of ethical analysis throughout their undergraduate computing courses.
Earlier this month, the McCoy Family Center for Ethics and Society, the Computer Science Department, and Stanford HAI hosted a one-day Embedded Ethics Conference on the topic of teaching responsible computer science. Attendees came from schools across the U.S. and several other countries to exchange ideas about how to design, support, and implement new programs. The conference agenda featured a welcome by Jennifer Widom, dean of the School of Engineering at Stanford; keynotes from several leading scholars in the field; and lively panel discussions on topics ranging from getting administrative buy-in to specific implementation strategies. A series of lightning talks included demos of a few programs that are in place at schools today. By all accounts, the conference was an inspiring event that brought thoughtful researchers together and surfaced a few promising new ideas.
"Ethics cannot be just a class on the side. It should be inescapable for students who are studying computer science," said Mehran Sahami, the James and Ellenor Chesebrough Professor in the Computer Science Department at Stanford and a co-organizer of the event. "We need to give students meaningful ways to grapple with these issues, so they become mindful of the impact of the work they do."
During and after the conference, attendees expressed appreciation for the opportunity to meet so many like-minded scholars, and they suggested the event served as a catalyst for taking action at their own schools.
Barbara Grosz, Higgins Research Professor of Natural Sciences in Harvard's John A. Paulson School of Engineering and a Stanford HAI Distinguished Fellow, kicked off the day with a presentation on the origins, evolution, and lessons of the Embedded EthiCS program that she co-founded at Harvard. "Siri and Watson drove me to develop an AI course that integrated ethics throughout its syllabus," she recalled. "I saw that our students were taught to write efficient code, but they were not taught to think about ethics. At the time, I was focused on teaching a new seminar course, not a larger change."
Grosz had some 60 students apply for 20 spots the first time she taught the course, "Intelligent Systems: Design and Ethical Challenges," and more than 140 applied the second year. At the end of the semester, students said they wanted CS to offer more courses that integrated ethics.
So she and Alison Simmons, the Samuel H. Wolcott Professor of Philosophy at Harvard, launched the Embedded EthiCS program in early 2017 with four courses and one graduate fellow. By spring 2023, it had grown to reach 9,500 students through 47 courses, with both graduate students and postdocs contributing, and with philosophers and computer scientists meeting in a weekly teaching lab to coordinate the development of new modules.
Grosz explained that the benefits to the graduate student and postdoc fellows in the teaching lab for Harvard's Embedded EthiCS program have ranged from fellows adapting their research or shaping entirely new research projects to fellows finding different kinds of job opportunities when they enter the workforce. And it's a win for faculty, who gain confidence in their understanding of ethics and their ability to discuss it in their teaching of computing. "It was heartwarming to see so many kindred spirits together at the conference. No one school can develop a program like this on its own. We need to help each other," Grosz said.
Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace, delivered a thought-provoking talk about the evolution of debates about ethics and technology over the last 20 years, from privacy and security issues in the early days of the internet, to disagreements about facial recognition, to questions about today's generative AI models. Cuéllar is also a former justice of the Supreme Court of California, a visiting scholar at Stanford Law School, and a member of the Stanford HAI Advisory Council. He has had a long-standing interest in the intersection of ethics, policy, computing, and data.
"In the beginning, I was focused on getting people to care a lot. We were seeing the staggering change in human welfare due to technology, but there's also been a darker side to the progress," he said. When the conversation shifted a few years later to deep learning, big data, and what the technology meant for surveillance and privacy, Cuéllar saw dilemmas coming in the legal system around who would be liable for AI systems and their performance. "Now we're in the era of generative AI, and we're all part of an A/B testing cycle. The technology is evolving in real time, and we are all subjects. It's hard to step back and ask what is working well and how we would want this to be done in an ideal world."
Cuéllar challenged the audience to put aside the writing of principles and focus on specific scenarios, such as how to handle medical records, resolve diplomatic disputes involving technology, or identify pathways to catastrophic risk. He urged the audience to be honest about recognizing the trade-offs that come with every decision and to avoid intellectual shortcuts. "We have an enormous moment of opportunity with the progress of technology and the current spike of interest among young people," he said.
One of the most pivotal talks of the day put the spotlight on the need for incorporating cultural competence into embedded ethics initiatives. Issues of diversity, equity, and inclusion have long been overlooked in computing disciplines; yet they significantly affect the cultures of university departments and tech organizations, as well as the retention of minoritized students, faculty, and staff. To shed light on this topic, Nicki Washington, Professor of the Practice of Computer Science and Gender, Sexuality, & Feminist Studies at Duke University, spoke about her research and experience teaching identity and cultural competence in computing.
Universities need to take a "yes, and" approach to embedding ethics and cultural competence, instead of saying "yes, but not now, not here, or not me," Washington said. In 2020, she created the Race, Gender, Class, & Computing course as a space for students to have conversations about identity, including how it impacts and is impacted by computing, and to develop an understanding of why these issues matter. The course begins with an exploration of identity (i.e., race, ethnicity, gender, sexuality, class, and ability), forms of oppression against these identities, social justice movements to eliminate these oppressions, and policies enacted to exclude or include identities. After students have spent time reflecting on identity, the class turns to specific technologies (facial recognition, surveillance, fintech, voice recognition, health care algorithms), examining who is considered the default user and how each technology affects people from different minoritized groups.
The elective course started with 20 students in fall 2020, and a wait list formed almost immediately. Washington said she has taught the course six times to date and has increased the class size to 100 to accommodate overwhelming student interest. "No two semesters have looked the same," she explained. "Each student and each class builds on what's happening in the world at the time."
To scale these efforts beyond the Duke campus, Washington leads the Alliance for Identity-Inclusive Computing Education, which is focused on broadening participation in computing across K-16. She also launched the Cultural Competence in Computing (3C) Fellows Program, a two-year professional development program now accepting applications for its fourth cohort. "People in CS are finally starting to listen to social scientists and understand the impact their work has on society. Technology is not neutral," she said.
Speakers and panelists agreed on several guidelines for launching successful embedded ethics and social responsibility programs, among them taking advantage of the resources that already exist:
Stanford, Harvard, and other schools have set up repositories of information for others to access and deploy. Stanford's Embedded Ethics team recently launched a new website, Embedding Ethics in Computer Science, with curricular resources for undergraduate CS courses. In addition, the Responsible Computing Challenge offers a playbook with teaching advice, and the Association for Computing Machinery has established the ACM Code of Ethics and Professional Conduct, with case studies available on its website.
It's early days for most embedded ethics programs, but those who attended this gathering said they were encouraged to hear from others in the field and to share their visions for this work. "Beyond the one day of meetings, I think we've created a community of practice," Sahami said. "It's not only about the ideas and best practices, but the invaluable connections to other people."
Miss the conference? Watch the recording and see a list of resources to assist in designing and implementing embedded ethics programs.