Health is important, and improvements in health improve lives. Yet we still do not fundamentally understand what it means to be healthy, and the same patient may receive different treatments across hospitals or clinicians as new evidence emerges or as individual illness is interpreted differently. Unlike many problems in machine learning – games like Go, self-driving cars, object recognition – disease management lacks well-defined rewards that can be used to learn rules. Models must also avoid learning biased rules or recommendations that harm minority or minoritized populations. These projects tackle the many novel technical opportunities for machine learning in health, and work to make important progress in health and health equity.
Clinical notes are becoming an increasingly important data source for machine learning (ML) applications in healthcare. Prior research has shown that deploying ML models can perpetuate existing biases against racial minorities, as bias can be implicitly embedded in data. In this study, we investigate the level of implicit race information available to ML models and human experts, and the implications of model-detectable differences in clinical notes. This paper was accepted by the AAAI/ACM Conference on AI, Ethics, and Society 2022.
Published work
Ghassemi, M. Presentation matters for AI-generated clinical advice. Nat Hum Behav 7, 1833–1835 (2023). https://doi.org/10.1038/s41562-023-01721-7
Elizabeth Bondi-Kelly, Tom Hartvigsen, Lindsay M Sanneman, Swami Sankaranarayanan, Zach Harned, Grace Wickerson, Judy Wawira Gichoya, Lauren Oakden-Rayner, Leo Anthony Celi, Matthew P Lungren, Julie A Shah, and Marzyeh Ghassemi. 2023. Taking Off with AI: Lessons from Aviation for Healthcare. In Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO ’23). Association for Computing Machinery, New York, NY, USA, Article 4, 1–14. https://doi.org/10.1145/3617694.3623224
Yuxin Xiao, Shulammite Lim, Tom Joseph Pollard, and Marzyeh Ghassemi. 2023. In the Name of Fairness: Assessing the Bias in Clinical Record De-identification. In 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’23), June 12–15, 2023, Chicago, IL, USA. ACM, New York, NY, USA, 15 pages. https://doi.org/10.1145/3593013.35939
Hammaad Adam, Ming Ying Yang, Kenrick Cato, Ioana Baldini, Charles Senteio, Leo Anthony Celi, Jiaming Zeng, Moninder Singh, and Marzyeh Ghassemi. 2022. Write It Like You See It: Detectable Differences in Clinical Notes by Race Lead to Differential Model Recommendations. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’22). Association for Computing Machinery, New York, NY, USA, 7–21. https://doi.org/10.1145/3514094.3534203
Adam, H., Balagopalan, A., Alsentzer, E. et al. Mitigating the impact of biased artificial intelligence in emergency decision-making. Commun Med 2, 149 (2022). https://doi.org/10.1038/s43856-022-00214-4
ICSR researchers Hammaad Adam and Fotini Christia receive the MIT Racism Research Award in the fund's inaugural year. The committee "sought ambitious proposals that marshal the best efforts of MIT's diverse and multi-disciplinary community to think creatively about this deep-rooted problem."
ICSR Healthcare vertical lead and Assistant Professor Marzyeh Ghassemi examines how the increasing use of artificial intelligence could impact medical care. “When you take state-of-the-art machine-learning methods and systems and then evaluate them on different patient groups, they do not perform equally,” says Ghassemi.
The Healthcare vertical team evaluated the impact of AI models on human decision-making in an emergency setting. Descriptive rather than prescriptive advice from such models can help mitigate poor outcomes for minority groups.
The study is co-led by Healthcare vertical lead Marzyeh Ghassemi, who also works on research led by IDSS SES student Hammaad Adam showing that models can identify race from clinical notes stripped of explicit race indicators.
Leaders in the field of AI and healthcare assessed machine learning techniques that support fairness, personalization and inclusiveness.
People
Led by Marzyeh Ghassemi, PhD (Assistant Professor, MIT CSAIL, and Director of Healthy ML), the Healthcare vertical team members are Hammaad Adam (MIT IDSS PhD Student, Social & Engineering Systems), Kenrick Cato (Nurse Researcher, New York-Presbyterian Hospital and Assistant Professor at Columbia University School of Nursing), and Charles Senteio, PhD (Assistant Professor of Library and Information Science, Rutgers).
Get involved
We are very interested in connecting with stakeholders working in the healthcare sector. Please email a description of your project to icsr@mit.edu. If you would like to be a sponsor and support our work, please reach out to idss-engage@mit.edu.