
Uncovering Bias in AI Pathology Models

12/18/2025

Harvard Medical School researchers have identified demographic bias in pathology AI models that measurably reduced diagnostic performance across patient groups. The models inferred race, age, and sex from tissue slides and used those signals in decision-making, yielding unequal diagnostic accuracy across groups and posing a tangible risk to diagnostic equity.

Analysis of a multi-institutional slide repository spanning 20 cancer types showed unequal model performance concentrated in lung and breast cancer tasks, with detectable disparities by race, age, and sex. The investigators report model underperformance for African American and some male patients on certain lung‑cancer subtype tasks, and for younger patients on several breast‑cancer subtype distinctions, with disparities present in roughly 29% of diagnostic comparisons.
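The kind of disparity audit described above can be illustrated with a minimal sketch: compute a performance metric per demographic group, then flag any comparison where the best-to-worst gap exceeds a threshold. The metric (accuracy), group labels, and threshold below are illustrative assumptions, not the study's actual protocol.

```python
# Hypothetical sketch of a per-group disparity check.
# The metric (accuracy), group labels, and 5-point threshold are
# illustrative; the study's actual metrics are not reproduced here.

from collections import defaultdict

def group_accuracies(records):
    """records: iterable of (group, y_true, y_pred) tuples.
    Returns accuracy computed separately for each demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, y_true, y_pred in records:
        total[group] += 1
        correct[group] += int(y_true == y_pred)
    return {g: correct[g] / total[g] for g in total}

def disparity_flagged(accs, threshold=0.05):
    """Flag a diagnostic comparison when the gap between the
    best- and worst-served groups exceeds the threshold."""
    gap = max(accs.values()) - min(accs.values())
    return gap > threshold, gap

# Toy example: group B is served half as well as group A.
accs = group_accuracies([("A", 1, 1), ("A", 0, 0), ("B", 1, 0), ("B", 0, 0)])
flagged, gap = disparity_flagged(accs)
```

Running such a check over every subtype task and demographic axis, and counting how many comparisons are flagged, is one way a figure like "29% of diagnostic comparisons" could be produced.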

The investigators attribute much of the gap to underrepresentation in training data, differential disease incidence, and subtle molecular differences across groups.

The team tested a mitigation strategy that combined standard approaches—data balancing and reweighting—with a contrastive‑learning innovation operationalized as the FAIR‑Path framework. The strategy reduced the demographic signal the models exploited and improved performance parity across target diagnostic categories. It therefore offers a practical route to fairer models, but still requires external validation across diverse cohorts and in small‑sample settings.
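Of the standard approaches named above, reweighting is the simplest to sketch: each sample is weighted by the inverse frequency of its demographic group, so under-represented groups contribute equally to the training loss. This is a generic illustration of reweighting, not FAIR‑Path's actual objective, whose contrastive component is not reproduced here.

```python
# Hypothetical sketch of inverse-frequency reweighting: every demographic
# group ends up contributing the same total weight to the training loss.
# This illustrates generic reweighting, not the FAIR-Path framework itself.

from collections import Counter

def inverse_frequency_weights(groups):
    """groups: list of per-sample group labels.
    Returns one weight per sample such that each group's
    weights sum to the same total (n_samples / n_groups)."""
    counts = Counter(groups)
    n_groups = len(counts)
    n = len(groups)
    return [n / (n_groups * counts[g]) for g in groups]

# Toy example: group A has 3 samples, group B only 1;
# after reweighting, both groups carry equal total weight.
weights = inverse_frequency_weights(["A", "A", "A", "B"])
```

In a training loop these weights would multiply each sample's loss term, counteracting the underrepresentation the article identifies as one driver of the disparity.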
