EDITORIAL

Emerging from AI utopia


Science  03 Apr 2020:
Vol. 368, Issue 6486, pp. 9
DOI: 10.1126/science.abb9369

By Edward Santow, Human Rights Commissioner, Australian Human Rights Commission

A future driven by artificial intelligence (AI) is often depicted as one paved with improvements across every aspect of life—from health, to jobs, to how we connect. But cracks in this utopia are starting to appear, particularly as we glimpse how AI can also be used to surveil, discriminate, and cause other harms. What existing legal frameworks can protect us from the dark side of this brave new world of technology?



Facial recognition is a good example of an AI-driven technology that is starting to have a dramatic human impact. When facial recognition is used to unlock a smartphone, the risk of harm is low, but the stakes are much higher when it is used for policing. In well over a dozen countries, law enforcement agencies have started using facial recognition to identify "suspects" by matching faces against photos scraped from the social media accounts of 3 billion people around the world. Recently, the London Metropolitan Police used the technology to identify 104 suspects, 102 of whom turned out to be "false positives." In a policing context, the human rights risk is highest because a person can be unlawfully arrested, detained, and ultimately subjected to wrongful prosecution. Moreover, facial recognition errors are not evenly distributed across the community. In Western countries, where data are more readily available, the technology is far more accurate at identifying white men than any other group, in part because it tends to be trained on datasets of photos that are disproportionately made up of white men. Such uses of AI can cause old problems—like unlawful discrimination—to appear in new forms.

Right now, some countries are using AI and mobile phone data to track people in self-quarantine because of the coronavirus disease 2019 pandemic. The privacy and other impacts of such measures might be justified by the scale of the current crisis, but even in an emergency, human rights must still be protected. Moreover, we will need to ensure that extreme measures do not become the new normal when the period of crisis passes.

It's sometimes said that existing laws in Western countries don't apply in the new world of AI. But this is a myth—laws apply to the use of AI, as they do in every other context. Imagine if a chief executive officer of a company preferred to recruit people of a particular race, unfairly disadvantaging other people, or if a bank offered credit more readily to men than women. Clearly, this is unlawful discrimination. So, why would the legal position be any different if discrimination occurred because these people were similarly disadvantaged by the use of an algorithm?

The laws that many countries already use to protect citizens—including laws that prohibit discrimination—need to be applied more rigorously and effectively in the new technology context. There has been a proliferation of AI ethics frameworks that provide guidance in identifying the ethical implications of new technologies and propose ways to develop and use these technologies for the better. The Australian Human Rights Commission's Human Rights and Technology Discussion Paper acknowledges an important role for ethics frameworks but notes that, to date, their practical impact has been limited and they cannot substitute for applying the law. Although this project has considered how Australia specifically should respond to the challenges of emerging technologies such as AI, the recommendations are general. The Commission sets out practical steps that researchers, government, industry, and regulators should take to ensure that AI is accountable in its development and use. It also suggests targeted reform to fill the gaps that have been exposed by the unprecedented adoption of AI and related technologies. For example, our laws should make crystal clear that momentous decisions—from sentencing in criminal cases to bank loan decisions—cannot be made in a "black box," whether or not AI is used in the decision-making process. And where the risk of harm is particularly severe, such as in the use of facial recognition for policing, the Commission proposes a moratorium in Australia until proper human rights safeguards are in place.

The proposals in the discussion paper are written in pencil rather than ink and are open for public comment until the end of this month (tech.humanrights.gov.au) before the final report is released later this year. AI offers many exciting possibilities and opportunities for humanity, but we need to innovate for good and ensure that what we create benefits everyone.
