Research Article

Dissecting racial bias in an algorithm used to manage the health of populations


Science  25 Oct 2019:
Vol. 366, Issue 6464, pp. 447-453
DOI: 10.1126/science.aax2342

eLetters is an online forum for ongoing peer review. Submissions of eLetters are open to all. eLetters are not edited, proofread, or indexed. Please read our Terms of Service before submitting your own eLetter.


  • RE: tautology in AI predictor
    • Eran Bellin, Professor of Epidemiology and Population Health, Albert Einstein College of Medicine / Montefiore Information Technology

    The inattentive reader will miss a crucial acknowledgement by Obermeyer et al. (1) of a serious tautology inherent in the described analytic enterprise. It goes like this:
    If you build a model to distribute future health care resources based on billing for present consumption,
    and if African Americans have less access to resources, whether through functional denial (inadequate resources in their communities) or self-denial (a lifetime of learned hopelessness about accessing such services in a meaningful way),
    then future distributions directed by that model will give more to Whites independent of need.
    To those to whom much has been given, more will be given.

    Obermeyer correctly identifies that the programs enlisting these models “primarily work to prevent acute health decompensations that lead to catastrophic health care utilization”. They are not focused on the long-term population health that would more properly be addressed 15 years before the “catastrophic sentinel event” and directed against prevalent risk factors (hypertension, diabetes, and elevated lipids) that are poorly controlled in African American populations. The unfairness hinted at in the article is the focus on the short-term benefit of cost reduction in the 30 days after an acute decompensation while ignoring the much broader health care needs of the population. Unfortunately, it is the goal itself that deviates from health care justice. It is lifelong healt...

    Competing Interests: None declared.
  • RE: Dissecting racial bias in an algorithm used to manage the health of populations

    Recent news that an algorithm deciding care for 200 million patients was racially biased is highly concerning. This cautionary tale provides valuable lessons. A hasty rejection of artificial intelligence (AI), however, would be wrong: AI has immense potential to improve health outcomes across society.

    AI is already transforming health for the better. British researchers have built systems that can accurately diagnose breast cancer and detect eye conditions faster than ever before. In hospitals, we are using AI to proactively identify sepsis cases, thereby saving lives. Reducing readmissions and creating more accurate staffing forecasts is also saving money – essential for cash-strapped health services.

    This story reminds companies to follow AI best practices. Set the right goals – the AI will follow your lead. Train the AI on your data and beware of bias. Avoid using third-party opaque “black-box” tools. Make sure decisions are easy to explain and justifiable. For sensitive areas, keep people involved. Despite the recent news, AI is ethical and trustworthy when used properly.

    James Lawson, AI Evangelist, DataRobot

    Competing Interests: None declared.
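The label-choice tautology described in the first letter can be illustrated with a minimal synthetic sketch. The numbers here are hypothetical (the 0.6 access factor and the need distribution are illustrative assumptions, not values from Obermeyer et al.); the point is only that when billed cost is the training label and one group converts need into cost at a lower rate because of reduced access, ranking by cost allocates program slots away from that group even when need is identical.

```python
# Sketch of label-choice bias: equal need, unequal cost-per-unit-need.
import random

random.seed(0)

ACCESS = {"white": 1.0, "black": 0.6}  # assumed cost generated per unit of need

patients = []
for group, access in ACCESS.items():
    for _ in range(1000):
        need = random.gauss(10, 2)   # identical need distribution in both groups
        cost = need * access         # observed label: billed cost, not need
        patients.append((group, need, cost))

# "Model" = rank by the cost label and enroll the top 20 percent,
# a stand-in for a high-risk care-management program.
patients.sort(key=lambda p: p[2], reverse=True)
enrolled = patients[: len(patients) // 5]

share = sum(1 for g, _, _ in enrolled if g == "black") / len(enrolled)
print(f"Black share of program slots: {share:.0%}")  # far below the 50% that equal need would warrant
```

Swapping the label from cost to a direct measure of need (here, sorting on `p[1]` instead of `p[2]`) restores roughly proportional enrollment, which is essentially the remedy the original article proposes.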
