Policy Forum: Biomedical Technology Regulation

Algorithms on regulatory lockdown in medicine

Science  06 Dec 2019:
Vol. 366, Issue 6470, pp. 1202-1204
DOI: 10.1126/science.aay9547

Summary

As use of artificial intelligence and machine learning (AI/ML) in medicine continues to grow, regulators face a fundamental problem: After evaluating a medical AI/ML technology and deeming it safe and effective, should the regulator limit its authorization to market only the version of the algorithm that was submitted, or permit marketing of an algorithm that can learn and adapt to new conditions? For drugs and ordinary medical devices, this problem typically does not arise. But it is this capability to continuously evolve that underlies much of the potential benefit of AI/ML. We address this “update problem” and the treatment of “locked” versus “adaptive” algorithms by building on two proposals suggested earlier this year by one prominent regulatory body, the U.S. Food and Drug Administration (FDA) (1, 2), which may play an influential role in how other countries shape their associated regulatory architecture. The emphasis of regulators needs to be on whether AI/ML is overall reliable as applied to new data and on treating similar patients similarly. We describe several features that are specific to and ubiquitous in AI/ML systems and are closely tied to their reliability. To manage the risks associated with these features, regulators should focus particularly on continuous monitoring and risk assessment, and less on articulating ex-ante plans for future algorithm changes.
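The continuous monitoring the authors favor over ex-ante update plans can be illustrated with a minimal sketch: track an adaptive algorithm's rolling performance on new data and flag it for risk assessment when reliability drifts below a target. The class name, window size, and threshold below are illustrative assumptions, not anything specified in the article.

```python
from collections import deque

class ReliabilityMonitor:
    """Rolling-window check that an algorithm stays reliable on new data."""

    def __init__(self, window=100, min_accuracy=0.90):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self.min_accuracy = min_accuracy

    def record(self, prediction, truth):
        self.outcomes.append(1 if prediction == truth else 0)

    def accuracy(self):
        if not self.outcomes:
            return None
        return sum(self.outcomes) / len(self.outcomes)

    def needs_review(self):
        # Trigger a risk assessment when rolling accuracy falls below target,
        # rather than relying on a pre-approved plan for future changes.
        acc = self.accuracy()
        return acc is not None and acc < self.min_accuracy

monitor = ReliabilityMonitor(window=10, min_accuracy=0.8)
for pred, truth in [(1, 1)] * 7 + [(0, 1)] * 3:  # 70% correct in the window
    monitor.record(pred, truth)
print(monitor.needs_review())  # prints True: drift below threshold flags the model
```

In this framing, the regulatory question shifts from "was the submitted version safe?" to "does the deployed, evolving system remain reliable as it sees new data?", which is exactly the monitoring-centric posture the summary argues for.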
