Housing policies enacted today are not applied in a vacuum. They are implemented in an environment that has evolved through a long, complex history which includes discriminatory policies and practices put into effect by various actors, ranging from the government to banks to private citizens. These racially inequitable policies include racial covenants, redlining, predatory lending and more. Emerging research shows that AI and algorithmic systems are exacerbating rather than ameliorating these existing inequities. In our work, we take a data-driven approach to think beyond algorithmic fairness and account for historical aspects in evaluating housing equity. Our primary objective in the housing vertical is to reveal mechanisms that lead to racially disparate outcomes in housing and to identify the most impactful intervention points to disrupt these mechanisms. In particular, we are focused on three topics: evictions and housing security; home ownership and lending; and health disparities that result from residential segregation.

The Housing vertical team consists of co-leads Peko Hosoi (IDSS, MechE) and Catherine D’Ignazio (DUSP), along with Bhavani Ananthabhotla (TPP), Talla Babou, Wonyoung So (DUSP), and Aurora Zhang (SES).

Peko Hosoi (IDSS – MechE) Co-lead
Catherine D’Ignazio (DUSP) Co-lead
Bhavani Ananthabhotla (MIT – TPP)
Talla Babou
Wonyoung So (MIT – DUSP)
Aurora Zhang (MIT – IDSS SES)

Current projects

The Housing vertical research team, in collaboration with MIT UROPs and MRSP students, is currently working on the following projects:

Structural Factors and Racial Disparities in Evictions

What are the mechanisms that contribute to racial disparities in evictions? How can causal inference methods be used to evaluate policies aimed at reducing evictions? In this poster presentation, authors Aurora Zhang and Anette "Peko" Hosoi find that even after accounting for economic characteristics such as poverty rate, median income, and median rent, there is a significant relationship between neighborhood racial composition and eviction rate. Read more.
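The kind of analysis described above can be sketched as a regression of eviction rate on neighborhood racial composition while controlling for economic covariates. The example below is a minimal illustration only, using synthetic data; the variable names and coefficient values are assumptions for demonstration, not the authors' actual dataset or model.

```python
import numpy as np

# Hypothetical illustration: regress tract-level eviction rate on
# neighborhood racial composition while controlling for economic
# covariates (poverty rate, median income, median rent).
# All data below are synthetic.
rng = np.random.default_rng(0)
n = 500

poverty = rng.uniform(0.05, 0.40, n)     # tract poverty rate
income = rng.normal(60_000, 15_000, n)   # median household income ($)
rent = rng.normal(1_500, 400, n)         # median gross rent ($)
pct_black = rng.uniform(0.0, 1.0, n)     # share of Black residents

# Synthetic outcome: eviction rate depends on economic factors AND,
# by construction, on racial composition -- mimicking a disparity
# that survives economic controls.
evict = (0.02 + 0.10 * poverty - 1e-7 * income
         + 0.03 * pct_black + rng.normal(0, 0.005, n))

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), poverty, income, rent, pct_black])
beta, *_ = np.linalg.lstsq(X, evict, rcond=None)

# The coefficient on racial composition stays positive even though
# the economic covariates are included in the model.
print(f"coefficient on racial composition: {beta[-1]:.4f}")
```

In a real analysis this would use observed tract-level data and a causal inference design rather than a plain OLS fit, but the structure (outcome, controls, and a coefficient of interest on racial composition) is the same.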

Beyond Fairness: Reparative Algorithms to Address Historical Injustices of Housing Discrimination in the US

Fairness in Machine Learning (ML) has mostly focused on interrogating the fairness of a particular decision point, under the assumption that the people represented in the data have been treated fairly throughout history. However, fairness ultimately cannot be achieved if that assumption does not hold. This is the case for mortgage lending discrimination in the US, which should be critically understood as the result of historically accumulated injustices enacted through public policies and private practices, including redlining, racial covenants, exclusionary zoning, and predatory inclusion, among others. To highlight these issues, we introduce case studies using contemporary mortgage lending data as well as historical census data in the US. Read more.

‘Fair’ AI could help redress bias against Black US homebuyers


The Housing vertical team discusses how AI could guide reparations programs created to redress decades of US housing discrimination against Black homebuyers.

Get involved

At the Beyond Fairness symposium, we learned the value of "unusual" connections: interdisciplinary and cross-domain ties among data scientists, urban planners, activists, and practitioners from both academia and industry. These connections will help build a community that can tackle issues of housing justice from many different points of view.

We are very interested in connecting with (local) stakeholders working on projects related to these issues. Please email a description of your project to icsr@mit.edu. If you would like to be a sponsor and support our work, please reach out to idss-engage@mit.edu.

© MIT Institute for Data, Systems, and Society | 77 Massachusetts Avenue | Cambridge, MA 02139-4307 | 617-253-1764 |