Health is important, and improvements in health improve lives. Yet we still do not fundamentally understand what it means to be healthy, and the same patient may receive different treatments across hospitals or clinicians as new evidence emerges or as individual illness is interpreted differently. Unlike many problems in machine learning – games like Go, self-driving cars, object recognition – disease management lacks well-defined rewards that can be used to learn rules. Models must also avoid learning biased rules or recommendations that harm minority or minoritized populations. These projects tackle the many novel technical opportunities for machine learning in health, and work to make important progress in health and health equity.

Current projects

Write It Like You See It

Clinical notes are becoming an increasingly important data source for machine learning (ML) applications in healthcare. Prior research has shown that deploying ML models can perpetuate existing biases against racial minorities, as bias can be implicitly embedded in data. In this study, we investigate the level of implicit race information available to ML models and human experts and the implications of model-detectable differences in clinical notes. Read more.
This paper was accepted at the AAAI/ACM Conference on AI, Ethics, and Society 2022.

Published work

  1. Ghassemi, M. Presentation matters for AI-generated clinical advice. Nat Hum Behav 7, 1833–1835 (2023).
  2. Elizabeth Bondi-Kelly, Tom Hartvigsen, Lindsay M Sanneman, Swami Sankaranarayanan, Zach Harned, Grace Wickerson, Judy Wawira Gichoya, Lauren Oakden-Rayner, Leo Anthony Celi, Matthew P Lungren, Julie A Shah, and Marzyeh Ghassemi. 2023. Taking Off with AI: Lessons from Aviation for Healthcare. In Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO ’23). Association for Computing Machinery, New York, NY, USA, Article 4, 1–14.
  3. Yuxin Xiao, Shulammite Lim, Tom Joseph Pollard, and Marzyeh Ghassemi. 2023. In the Name of Fairness: Assessing the Bias in Clinical Record De-identification. In 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’23), June 12–15, 2023, Chicago, IL, USA. ACM, New York, NY, USA, 15 pages.
  4. Hammaad Adam, Ming Ying Yang, Kenrick Cato, Ioana Baldini, Charles Senteio, Leo Anthony Celi, Jiaming Zeng, Moninder Singh, and Marzyeh Ghassemi. 2022. Write It Like You See It: Detectable Differences in Clinical Notes by Race Lead to Differential Model Recommendations. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’22). Association for Computing Machinery, New York, NY, USA, 7–21.
  5. Adam, H., Balagopalan, A., Alsentzer, E. et al. Mitigating the impact of biased artificial intelligence in emergency decision-making. Commun Med 2, 149 (2022).


MIT Racism Research Award Winners


ICSR researchers Hammaad Adam and Fotini Christia receive MIT Racism Research Award in the fund's inaugural year. The committee "sought ambitious proposals that marshal the best efforts of MIT's diverse and multi-disciplinary community to think creatively about this deep-rooted problem."

Will artificial intelligence help — or hurt — medicine? // NPR


ICSR Healthcare vertical lead and Assistant Professor Marzyeh Ghassemi examines how the increasing use of artificial intelligence could impact medical care. “When you take state-of-the-art machine-learning methods and systems and then evaluate them on different patient groups, they do not perform equally,” says Ghassemi.

Subtle biases in AI can influence emergency decisions


Harm can be minimized, however, if the advice an AI system delivers is properly framed, an MIT team including members of the ICSR Healthcare vertical has shown.

Mitigating the impact of biased artificial intelligence in emergency decision-making

The Healthcare vertical team evaluated the impact of AI models on human decision-making in an emergency setting. Descriptive rather than prescriptive advice from such models can help mitigate poor outcomes for minority groups.

Artificial intelligence predicts patients’ race from their medical images

The study is co-led by Healthcare vertical lead Marzyeh Ghassemi, who also works on research, led by IDSS SES student Hammaad Adam, showing that models can identify race from clinical notes stripped of explicit race indicators.

The downside of machine learning in health care

Assistant Professor Marzyeh Ghassemi explores how hidden biases in medical data could compromise artificial intelligence approaches.

AI for Healthcare Equity Conference

Leaders in the field of AI and healthcare assessed machine learning techniques that support fairness, personalization and inclusiveness.


Led by Marzyeh Ghassemi, PhD (Assistant Professor, MIT CSAIL, and Director of Healthy ML), the Healthcare vertical team members are Hammaad Adam (MIT IDSS PhD Student, Social & Engineering Systems), Kenrick Cato (Nurse Researcher, New York-Presbyterian Hospital and Assistant Professor at Columbia University School of Nursing), and Charles Senteio, PhD (Assistant Professor of Library and Information Science, Rutgers).

Marzyeh Ghassemi (MIT – CSAIL), Vertical Lead
Hammaad Adam (MIT – IDSS SES)
Kenrick Cato (Columbia University)
Charles Senteio (Rutgers University)

Get involved

We are very interested in connecting with stakeholders working in the healthcare sector. Please email a description of your project to . If you would like to be a sponsor and support our work, please reach out to .

MIT Institute for Data, Systems, and Society
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139-4307