Reducing Biases towards Minoritized Populations in Medical Curricular Content via AI for Fairer Health Outcomes

Dr. Shiri Dori-Hacohen, University of Connecticut

Biased information (recently termed bisinformation) continues to be taught in medical curricula, often long after having been debunked. In this paper, we introduce BRICC, a first-in-class initiative that seeks to mitigate medical bisinformation using machine learning to systematically identify and flag text with potential biases for subsequent review in an expert-in-the-loop fashion, thus greatly accelerating an otherwise labor-intensive process. A gold-standard BRICC dataset was developed over several years and contains over 12K pages of instructional materials. Medical experts meticulously annotated these documents for bias according to comprehensive coding guidelines, emphasizing gender, sex, age, geography, ethnicity, and race. Using this labeled dataset, we trained, validated, and tested medical bias classifiers. We test three classifier approaches: binary classifiers (a general bias classifier as well as type-specific classifiers); an ensemble combining the independently-trained bias type-specific classifiers; and a multi-task learning (MTL) model tasked with predicting both general and type-specific biases. While MTL led to some improvement on race bias detection in terms of F1-score, it did not outperform binary classifiers trained specifically on each task. On general bias detection, the binary classifier achieves an AUC of up to 0.923, a 27.8% improvement over the baseline. This work lays the foundations for debiasing medical curricula by exploring a novel dataset and evaluating different model training strategies, offering new pathways for more nuanced and effective mitigation of bisinformation.
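To make the binary-classifier setting concrete, the sketch below trains a toy "flagged as potentially biased vs. not flagged" text classifier and scores it with AUC, the metric reported above. The tiny corpus, the bag-of-words features, and the hand-rolled logistic regression are all illustrative assumptions for exposition; they are not the BRICC models, features, or data.

```python
# Minimal sketch of a binary bias classifier evaluated by AUC.
# Assumptions (not from the paper): toy snippets, bag-of-words features,
# plain logistic regression trained by batch gradient descent.
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def featurize(texts, vocab):
    # Bag-of-words count vector over a fixed vocabulary.
    rows = []
    for t in texts:
        counts = Counter(tokenize(t))
        rows.append([counts.get(w, 0) for w in vocab])
    return rows

def train_logreg(X, y, lr=0.5, epochs=200):
    # Batch gradient descent on the logistic (cross-entropy) loss.
    w, b, n = [0.0] * len(X[0]), 0.0, len(X)
    for _ in range(epochs):
        gw, gb = [0.0] * len(w), 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            err = 1.0 / (1.0 + math.exp(-z)) - yi
            for j, xj in enumerate(xi):
                gw[j] += err * xj
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict_proba(X, w, b):
    return [1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            for xi in X]

def auc(y_true, scores):
    # AUC = probability a random positive is scored above a random negative.
    pos = [s for s, y in zip(scores, y_true) if y == 1]
    neg = [s for s, y in zip(scores, y_true) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical labeled snippets (1 = flagged as potentially biased, 0 = not).
texts = [
    "elderly patients rarely comply with treatment",       # 1
    "this disease only affects women",                     # 1
    "race determines pain tolerance in patients",          # 1
    "dosage should be adjusted based on renal function",   # 0
    "monitor blood pressure at every visit",               # 0
    "treatment response varies with measured biomarkers",  # 0
]
labels = [1, 1, 1, 0, 0, 0]

vocab = sorted({w for t in texts for w in tokenize(t)})
X = featurize(texts, vocab)
w, b = train_logreg(X, labels)
scores = predict_proba(X, w, b)
print(f"training AUC: {auc(labels, scores):.3f}")
```

In the MTL variant described above, one shared encoder would instead feed several output heads (general bias plus each bias type), trained jointly; the binary setting shown here trains one model per task.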

Dr. Shiri Dori-Hacohen

Dr. Shiri Dori-Hacohen is an Assistant Professor in the School of Computing at the University of Connecticut, where she leads the Reducing Information Ecosystem Threats (RIET) Lab. Her research focuses on threats to the online information ecosystem and the sociotechnical AI alignment problem, while fostering transdisciplinary collaborations with experts spanning medicine, public health, the social sciences, and the humanities. She has served as PI or Co-PI on $7.7M in federal funding from the National Science Foundation. Her career spans academia and industry, including Google, Facebook, and her role as Founder/CEO of a startup, among others. She received her M.Sc. and B.Sc. (cum laude) from the University of Haifa in Israel and her M.S. and Ph.D. from the University of Massachusetts Amherst, where she researched computational models of controversy. Dr. Dori-Hacohen is the recipient of several prestigious awards, including first place at UMass Amherst's 2016 Innovation Challenge. Her AI safety & ethics work won the AI Risk Analysis Award at the NeurIPS ML Safety workshop and was cited in the March 2023 AI Open Letter calling for a pause on AI development. Dr. Dori-Hacohen has taken an active leadership role in broadening participation in Computer Science at local and global scales, and was named to the 2023 D-30 Disability Impact List. She has been quoted and interviewed as an expert in multiple media outlets including Reuters, The Guardian, Forbes, and Galei Tzahal radio (in Hebrew).