Seed Funds

The IDSS/SSRC Combatting Systemic Racism Seed Fund Program supports innovative, early-stage cross-disciplinary research projects with a focus on combatting systemic racism. Through these grants, IDSS and SSRC seek to encourage researchers from across MIT to collaborate in bringing together new ideas from information and decision systems; data sciences and statistics; and the social sciences to identify and overcome racially discriminatory processes and outcomes across a range of U.S. institutions and policy domains.

Proposals addressing a broad range of systemic racism challenges are eligible, including but not limited to housing, healthcare, policing, and education.

For AY 2023-2024, we will support the following projects:

Race, Wealth, and Entrepreneurship

PIs: Daniel August, Peko Hosoi, Sarah Williams

Entrepreneurship is often touted as an important lever in closing the racial wealth gap. However, profitable entrepreneurship has consistently remained out of reach for many minority groups in America. This project seeks to develop a theoretical framework for understanding and addressing the structural systems that perpetuate the racial wealth gap through an entrepreneurial lens. In addition, the group will develop a digital tool to help policymakers and activists alike understand historical policies and design new economic and social policies.

Artificial Paranoia: Human, Algorithmic, and Systemic Biases in Amazon Ring’s Surveillance Network

PIs: Ashia Wilson, Sandy Pentland

Amazon Ring Doorbell owners can anonymously share recorded videos of their home entryways on social platforms, creating a new medium for community policing. Through Amazon’s partnerships with law-enforcement agencies, these videos are also implicitly accessible to over 2,000 police departments. This project aims to understand the human biases at play in this sharing behavior, as well as the algorithmic biases introduced by the computer vision models used for surveillance, in the context of systemic racism in policing. The researchers hope to contribute to ICSR’s development of analytical methods for understanding systemic biases, to evaluate de-biasing methods, and to ask whether “responsible AI” can exist at all within such systems of oppression. This project will build on ongoing work in the IDSS-ICSR policing vertical and will complement related research from the Media Lab and CSAIL.

AY 2022-2023 Seed Fund Recipients:

An Ethics, Equity, and Justice Audit and Reimagination of Engineering Education 

PIs: Catherine D’Ignazio, Cynthia Breazeal, Maria Yang 

Contrary to the common belief that engineered systems are objective in their design and function, they are sometimes designed in ways that perpetuate systemic racism. Under current approaches to teaching engineering design, students are under-equipped to consider the societal impacts of their work and often do not recognize these ingrained forms of racism. Through a series of surveys and analyses, this project aims to dismantle the racist and colonial values historically incorporated into, and currently perpetuated by, technological design, and to reevaluate engineering teaching with regard to its societal impact.

EVDT Integrated Modeling Framework Applied to Measure Environmental Injustice and Socioeconomic Disparities in Prison Landscapes 

PIs: Danielle Wood, Justin Steil, Dara Entekhabi

Various studies in recent years have drawn attention to the inadequate environmental conditions of prisons, which are often exposed to hazards such as air pollution, poor water quality, and proximity to hazardous waste facilities. However, this area of research lacks large-scale empirical evidence. Using geospatial satellite data analysis, and focusing specifically on flood risk, extreme temperatures, and air pollution, this project will be the first comprehensive large-scale study of the risks these prison environments pose.

Detecting and Mitigating Multi-Modal Medical Misinformation 

PIs: Marzyeh Ghassemi, David Rand 

Misinformation is becoming prevalent and spreads easily on social media. This is especially dangerous in the case of medical misinformation, as seen with information about Covid-19 vaccines in recent years. To combat this, many social media companies have built AI systems to detect misinformation; however, these focus mainly on text-based posts, while memes (images with embedded text that depend heavily on context) remain challenging for AI to identify. This study seeks to better detect memes and interpret their meaning holistically, with the objective of curbing the spread of medical misinformation on social media platforms. 

Tradeoffs in Hotspot Predictive Policing 

PIs: Manish Raghavan, Fotini Christia 

In recent years, predictive policing has become a prevalent law-enforcement method, focused mainly on hotspot policing: directing large amounts of police resources to small areas with statistically high crime rates. However, this approach may lead to harmful conflicts with the residents of these areas, which are often low-income and minority communities. This study aims to evaluate emerging questions about hotspot policing practices and to assess possible bias in the data upon which these practices are based. 


MIT Institute for Data, Systems, and Society
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139-4307
617-253-1764