Perspective | Artificial Intelligence

How AI can be a force for good


Science  24 Aug 2018:
Vol. 361, Issue 6404, pp. 751-752
DOI: 10.1126/science.aat5991

eLetters is an online forum for ongoing peer review. Submission of eLetters is open to all. eLetters are not edited, proofread, or indexed. Please read our Terms of Service before submitting your own eLetter.


  • AI systems as supplemental agents, rather than primary agents
    • Aninda Saha, Senior Research Associate, Griffith University
    • Other Contributors:
      • Tapan Sarker, Director of Engagement & Senior Lecturer, Griffith University

    In the article “How AI can be a force for good,” Mariarosaria Taddeo et al. focus on the concept of distributed responsibility, which spreads moral responsibility among the designers, regulators, and users of AI technologies (1). While this is a promising concept, it is currently unrealisable because the decisions made by AI algorithms are still not well understood (2), making it difficult to pinpoint the cause of undesired outcomes. Even if they were understood, distributing responsibility across an AI system’s vast network of contributors would leave no one accountable for catastrophic outcomes (3), which would in practice encourage misuse of these technologies. In addition, many developing countries, particularly in Asia, have weaker governance bodies and will struggle to allocate distributed responsibility for AI-related catastrophes (4). In any case, the focus should be not on who is to blame but on how such outcomes can be prevented. Harnessing the potential of AI is important, but it should not be done without human oversight: a person overseeing an AI system helps ensure the safe operation of automated tasks (a minimal sketch of such an oversight gate appears after these letters). As David Thodey, Chairman of the Commonwealth Scientific and Industrial Research Organisation (CSIRO), expressed at a recent Committee for Economic Development of Australia (CEDA) convention in Brisbane, Australia, there are certain tasks that i...

    Competing Interests: None declared.
  • TPM-like authenticity function is needed for securing AI products

    In their article “How AI can be a force for good,” Mariarosaria Taddeo et al. introduce the Explainable Artificial Intelligence program of DARPA (Defense Advanced Research Projects Agency) (1). However, most lay users of AI technology in autonomous systems do not need to understand the details; they simply want to know the result, so such explanations do not help the lay public at all. The authors also introduce a translational-ethics approach built on delegation and responsibility to make AI a force for good (1), but this approach is weak and naive against adversarial and malicious AI developers. Instead of relying on translational ethics, robust TPM-like (trusted platform module) authenticity functions are needed to secure AI products. According to Wikipedia, a TPM is an international standard for a secure cryptoprocessor, a dedicated microcontroller designed to secure hardware through integrated cryptographic keys (2). TPMs have long been used in PCs and smartphones, yet we currently have no tools for verifying whether adversarial or malicious AI modules have been integrated into a system. To remove adversarial and malicious AI products from the market, robust authenticity regulations are required for verifying and certifying trusted AI systems (a sketch of such an authenticity check appears after these letters).

    References:
    1. Mariarosaria Taddeo et al., How AI can be a force for good, Science 24 Aug 2018: Vol. 361, Issue 6404, pp. 751-752, DOI: 10.1126/science.aat5991.

    Competing Interests: None declared.
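
The human-oversight gate described in the first letter can be made concrete. Below is a minimal sketch, assuming a hypothetical Decision record, confidence threshold, and reviewer callback (none of these names come from the letters): routine, high-confidence actions proceed automatically, while uncertain ones are deferred to a person.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Decision:
        action: str        # what the AI system proposes to do
        confidence: float  # the model's confidence, in [0, 1]

    def oversee(decision: Decision,
                human_review: Callable[[Decision], bool],
                threshold: float = 0.95) -> bool:
        """Return True if the proposed action may proceed."""
        if decision.confidence >= threshold:
            return True                # routine task: automate it
        return human_review(decision)  # uncertain task: a person decides

    # Example: a low-confidence decision is escalated to the overseer.
    ok = oversee(Decision(action="deny loan application", confidence=0.62),
                 human_review=lambda d: input(f"Allow '{d.action}'? [y/N] ") == "y")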
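
The TPM-style authenticity check called for in the second letter might look like the following sketch. Real TPM attestation relies on asymmetric keys held in hardware; here an HMAC over the model file's hash stands in purely for illustration, and all names (sign_model, verify_model, vendor_key) are hypothetical.

    import hashlib
    import hmac

    def sign_model(model_bytes: bytes, vendor_key: bytes) -> str:
        """Vendor side: produce an authenticity tag for a model binary."""
        digest = hashlib.sha256(model_bytes).digest()
        return hmac.new(vendor_key, digest, hashlib.sha256).hexdigest()

    def verify_model(model_bytes: bytes, tag: str, vendor_key: bytes) -> bool:
        """Deployment side: reject any model whose tag does not match,
        e.g. because a malicious module was swapped in after signing."""
        return hmac.compare_digest(sign_model(model_bytes, vendor_key), tag)

    # Example: a tampered model fails the check.
    key = b"vendor-held secret"
    model = b"...serialized model weights..."
    tag = sign_model(model, key)
    assert verify_model(model, tag, key)
    assert not verify_model(model + b"backdoor", tag, key)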
