How AI can help combat systemic racism

March 17, 2022

S. Craig Watkins looks beyond algorithmic bias to an AI future where models deal more effectively with systemic inequality.

By Scott Murray

In 2020, Detroit police arrested a Black man for allegedly shoplifting almost $4,000 worth of watches from an upscale boutique. He was handcuffed in front of his family and spent a night in lockup. After some questioning, however, it became clear that the police had the wrong man. So why did they arrest him in the first place?

The reason: a facial recognition algorithm had matched the photo on his driver’s license to grainy security camera footage.

Facial recognition algorithms — which have repeatedly been demonstrated to be less accurate for people with darker skin — are just one example of how racial bias gets replicated within and perpetuated by emerging technologies.

“There’s an urgency as AI is used to make really high-stakes decisions,” says MLK Visiting Professor S. Craig Watkins, whose home department for his time at MIT is the Institute for Data, Systems, and Society (IDSS). “The stakes are higher because new systems can replicate historical biases at scale.”

Watkins, a professor at the University of Texas at Austin and founding director of the Institute for Media Innovation, researches the impacts of media and data-based systems on human behavior, with a particular focus on issues related to systemic racism. “One of the fundamental questions of the work is: how do we build AI models that deal with systemic inequality more effectively?”

Ethical AI

Inequality is perpetuated by technology in many ways across many sectors. One broad domain is healthcare, where Watkins says inequity shows up in both quality of and access to care. The demand for mental health care, for example, far outstrips the capacity for services in the U.S. That demand has been exacerbated by the pandemic, and access to care is harder for communities of color.

For Watkins, taking the bias out of the algorithm is just one component of building more ethical AI. He also works to develop tools and platforms that can address inequality outside of tech head-on. In the case of mental health access, this entails developing a tool to help mental health providers deliver care more efficiently.

“We are building a real-time data collection platform that looks at activities and behaviors and tries to identify patterns and contexts in which certain mental states emerge,” says Watkins. “The goal is to provide data-informed insights to care providers in order to deliver higher impact services.”
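The article doesn’t describe the platform’s internals, but the core idea of surfacing behavioral patterns for a care provider can be sketched briefly. The Python below is purely illustrative, not the team’s actual system: the signal (daily sleep hours), the rolling-baseline test, and every threshold are assumptions.

```python
# A minimal, hypothetical sketch -- not Watkins's actual platform.
# Signals and thresholds are illustrative. The idea: keep a rolling
# per-user baseline for a behavioral signal and flag days that deviate
# sharply from it, so a care provider can review them.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class DailySignals:
    day: int
    sleep_hours: float  # hypothetical sensor- or self-reported signal

def flag_deviations(history, window=7, z_cut=2.0):
    """Return days whose sleep deviates more than z_cut standard
    deviations from the trailing window-day baseline."""
    flagged = []
    for i in range(window, len(history)):
        baseline = [h.sleep_hours for h in history[i - window:i]]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(history[i].sleep_hours - mu) / sigma > z_cut:
            flagged.append(history[i].day)
    return flagged

# Ten days of roughly stable sleep, then a sharp disruption on day 10.
data = [DailySignals(d, 7.5 + 0.1 * (d % 3)) for d in range(10)]
data.append(DailySignals(10, 3.0))
print(flag_deviations(data))  # -> [10]
```

A real platform would fuse many noisier signals and contexts, and any flag would need clinical validation before reaching a provider.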

Watkins is no stranger to the privacy concerns such a platform would raise. He takes a user-centered approach to its development, grounded in data ethics. “Data rights are a significant component,” he argues. “You have to give the user complete control over how their data is shared and used and what data a care provider sees. No one else has access.”
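As a rough illustration of that principle, consider a consent layer that every provider-facing read must pass through. Everything here, from the field names to the Consent type, is hypothetical: a minimal sketch of user-controlled sharing, not the platform’s real design.

```python
# A hypothetical sketch of user-controlled data sharing. Every
# provider-facing read passes through a consent record that only
# the user can modify; unshared fields are never exposed.
from dataclasses import dataclass, field

@dataclass
class Consent:
    shared_fields: set = field(default_factory=set)  # user-chosen fields

    def grant(self, name):
        self.shared_fields.add(name)

    def revoke(self, name):
        self.shared_fields.discard(name)

def provider_view(record, consent):
    """Return only the fields the user has consented to share."""
    return {k: v for k, v in record.items() if k in consent.shared_fields}

consent = Consent()
consent.grant("sleep_hours")
record = {"sleep_hours": 6.2, "location": "...", "notes": "..."}
print(provider_view(record, consent))  # -> {'sleep_hours': 6.2}
```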

Combatting Systemic Racism

Here at MIT, Watkins has joined the newly launched Initiative on Combatting Systemic Racism (ICSR), an IDSS research collaboration that brings together faculty and researchers from the Schwarzman College of Computing and beyond. The aim of the ICSR is to develop and harness computational tools that can help effect structural and normative change towards racial equity.

The ICSR collaboration has separate project teams researching systemic racism in different sectors of society, including healthcare. Each of these ‘verticals’ addresses different but interconnected issues, from sustainability to employment to gaming. Watkins is a part of two ICSR groups, policing and housing, which aim to better understand the processes that lead to discriminatory practices in both sectors. “Discrimination in housing contributes significantly to the racial wealth gap in the U.S.,” says Watkins.

The policing team examines patterns in how different populations get policed. “There is obviously a significant and charged history to policing and race in America,” says Watkins. “This is an attempt to understand, to identify patterns, and note regional differences.”

Watkins and the policing team are building models using data that details police interventions, responses, and race, among other variables. The ICSR is a good fit for this kind of research, says Watkins, who notes the interdisciplinary focus of both IDSS and the college.
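The article doesn’t specify which models the team uses, but a first pass over data like this often starts with simple disparity comparisons before any model fitting. The sketch below is illustrative only, with made-up records and abstract group labels; it shows the flavor of the question rather than the ICSR team’s method.

```python
# A hypothetical first-pass analysis, not the ICSR team's model.
# Records and group labels are made up. It compares how often stops
# escalate to a search across groups, a disparity signal that would
# then be probed with models controlling for region and context.
from collections import defaultdict

# (group, region, escalated_to_search) -- all values illustrative
stops = [
    ("A", "north", True), ("A", "north", False), ("A", "south", True),
    ("B", "north", False), ("B", "south", False), ("B", "south", True),
]

def escalation_rates(records):
    counts = defaultdict(lambda: [0, 0])  # group -> [escalations, stops]
    for group, _region, escalated in records:
        counts[group][0] += int(escalated)
        counts[group][1] += 1
    return {g: e / n for g, (e, n) in counts.items()}

print(escalation_rates(stops))  # -> {'A': 0.666..., 'B': 0.333...}
# A gap between groups is a flag for deeper analysis, not a conclusion:
# confounders like region and stop context must be accounted for.
```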

“Systemic change requires a collaborative model and different expertise,” says Watkins. “We are trying to maximize influence and potential on the computational side, but we won’t get there with computation alone.”

Opportunities for change

Models can also predict outcomes, but Watkins is careful to point out that no algorithm alone will solve the challenges of racial inequity.

“Models in my view can inform policy and strategy that we as humans have to create. Computational models can inform and generate knowledge, but that doesn’t equate with change.” Turning that knowledge into progress takes additional work, and additional expertise in policy and advocacy.

One important lever of change, he argues, will be building a more AI-literate society through access to information and opportunities to understand AI and its impact in a more dynamic way. He hopes to see greater data rights and greater understanding of how societal systems impact our lives.

“I was inspired by the response of younger people to the murders of George Floyd and Breonna Taylor,” he says. “Their tragic deaths shine a bright light on the real-world implications of structural racism and have forced the broader society to pay more attention to this issue, which creates more opportunities for change.”

Watch: Artificial Intelligence and the Future of Racial Justice

