ICSR Project Teams


Healthcare

Health is important, and improvements in health improve lives. Yet we still lack a fundamental understanding of what it means to be healthy, and the same patient may receive different treatments at different hospitals or from different clinicians as new evidence emerges or individual illness is interpreted differently. Unlike many problems in machine learning, such as games like Go, self-driving cars, and object recognition, disease management lacks the well-defined rewards from which rules can be learned. Models must also avoid learning biased rules or recommendations that harm minorities or minoritized populations. These projects tackle the many novel technical opportunities for machine learning in health and work to make meaningful progress in health and health equity.


Housing

Housing policies enacted today are not applied in a vacuum. They are implemented in an environment shaped by a long, complex history that includes discriminatory policies and practices put into effect by actors ranging from the government to banks to private citizens. These racially inequitable policies include racial covenants, redlining, predatory lending, and more. Emerging research shows that AI and algorithmic systems are exacerbating rather than ameliorating these existing inequities. In our work, we take a data-driven approach that looks beyond algorithmic fairness to account for historical context in evaluating housing equity. Our primary objective in the housing vertical is to reveal the mechanisms that lead to racially disparate outcomes in housing and to identify the most impactful intervention points for disrupting those mechanisms. In particular, we focus on three topics: evictions and housing security; home ownership and lending; and health disparities that result from residential segregation.

Antiracism, Games, and Immersive Media

Racism, and related forms of discrimination, manifests in a variety of contexts, and video games and immersive environments are no exception. In the MIT Virtuality/IDSS Antiracism, Games, and Immersive Media vertical, we harness the power of games and immersive environments to study and combat racist behaviors, phenomena, and systems in video games and immersive experiences. We address racial bias and discrimination in and through virtual worlds and the virtual identities that inhabit them. We design and develop interactive experiences that support reflection on, understanding of, and change in social ills such as racial bias, helping individuals and institutions better recognize and positively intervene in the face of racism, not just in virtual worlds but in the physical world as well.


Policing

The central theme of this project is to understand the role of data in the design of racially (and otherwise) unbiased policies for emergency and police priority dispatch systems, policing, the justice system, correctional facilities, and beyond. Toward that end, the project aims to develop a publicly available, comprehensive “data hub” to foster the role of data in policy design, as well as analytic methods to evaluate biases (racial and otherwise) using that data.

Social Media

A society-wide failure of algorithmic transparency is currently perpetuating innumerable social, economic, and health-related risks around the world. While algorithms play a role in the proliferation of extremism online, be it white nationalism or other toxic social phenomena, we know very little about how these algorithms operate and how their operation affects users; researchers have no robust way to probe how or why extremism emerges and what role algorithms play in its development. Our team is working on causal investigations of algorithmic responses to user behavior and of the evolving dialectic between user behavior and algorithmic recommendations.

Contributors to the edited volume on systemic racism and computation

The essays are based on presentations given during the ICSR workshop series and explore the potential for data to both combat and perpetuate systemic racism in the U.S. Topics include healthcare inequities, policing, algorithmic bias, and more.

© MIT Institute for Data, Systems, and Society | 77 Massachusetts Avenue | Cambridge, MA 02139-4307 | 617-253-1764