Health is important, and improvements in health improve lives. However, we still don’t fundamentally understand what it means to be healthy, and the same patient may receive different treatments at different hospitals or from different clinicians as new evidence emerges or individual illness is interpreted. Unlike many problems in machine learning – games like Go, self-driving cars, object recognition – disease management does not have well-defined rewards that can be used to learn rules. Models must also avoid learning biased rules or recommendations that harm minorities or minoritized populations. These projects tackle the many novel technical opportunities for machine learning in health and work to make important progress in health and health equity.

Led by Marzyeh Ghassemi, PhD (Assistant Professor, MIT CSAIL, and Director of Healthy ML), the Healthcare vertical team members are Hammaad Adam (MIT IDSS PhD student, Social & Engineering Systems), Kenrick Cato (Nurse Researcher, New York-Presbyterian Hospital, and Assistant Professor, Columbia University School of Nursing), Charles Senteio, PhD (Assistant Professor of Library and Information Science, Rutgers University), and Mingying Yang (MIT Chemical Engineering student).


Current projects

Write It Like You See It

Clinical notes are becoming an increasingly important data source for machine learning (ML) applications in healthcare. Prior research has shown that deploying ML models can perpetuate existing biases against racial minorities, as bias can be implicitly embedded in data. In this study, we investigate the level of implicit race information available to ML models and human experts, and the implications of model-detectable differences in clinical notes.
This paper was accepted at the AAAI/ACM Conference on AI, Ethics, and Society 2022.


Mitigating the impact of biased artificial intelligence in emergency decision-making

The Healthcare vertical team evaluated the impact of AI models on human decision-making in an emergency setting, finding that descriptive rather than prescriptive advice from such models can help mitigate poor outcomes for minority groups.

Artificial intelligence predicts patients’ race from their medical images

The study was co-led by Healthcare vertical lead Marzyeh Ghassemi, who also works on related research led by IDSS SES student Hammaad Adam showing that models can identify race from clinical notes stripped of explicit race indicators.

The downside of machine learning in health care

Assistant Professor Marzyeh Ghassemi explores how hidden biases in medical data could compromise artificial intelligence approaches.

AI for Healthcare Equity Conference

Leaders in the field of AI and healthcare assessed machine learning techniques that support fairness, personalization, and inclusiveness.

Get involved

We are very interested in connecting with stakeholders working in the healthcare sector. Please email a description of your project to us. If you would like to be a sponsor and support our work, please reach out.

© MIT Institute for Data, Systems, and Society | 77 Massachusetts Avenue | Cambridge, MA 02139-4307 | 617-253-1764 |