Policy Forum | Machine Learning

“Explaining” machine learning reveals policy challenges


Science  26 Jun 2020:
Vol. 368, Issue 6498, pp. 1433-1434
DOI: 10.1126/science.aba9647

eLetters is an online forum for ongoing peer review. Submission of eLetters is open to all. eLetters are not edited, proofread, or indexed.


  • A new risk of Quantum Machine Learning (QML) in decision-making processes
    • Yuichi Hirata, Associate Professor, Hokkaido University, Central Institute of Isotope Science, and Graduate School of Biomedical Science and Engineering

    In decision-making processes such as public policy-making, machine learning (ML) algorithms that learn the relationships between inputs and decision outputs are in high demand, as discussed in (1).

    As also argued in (1), decision-makers such as policy-makers should be required to be more explicit about their objectives, so that they can “explain” ML systems’ decisions and actions to human users and thereby satisfy ethical and trustworthiness requirements for political accountability and legal compliance.

    However, even when the organizations to which decision-makers belong have explicit objectives to ensure the explainability of ML systems’ decisions and actions and so resolve the “black box” phenomenon, an ML system may still produce decisions that are not optimized and that fall outside the system’s “intended” behavior because of technical problems.

    One such new risk arising from technical problems comes from quantum machine learning (QML) algorithms, which learn the relationships between inputs and decision outputs encoded in quantum bits (qubits) and manipulated by quantum operations on a quantum computer.

    For example, such a risk may arise from the inherent uncertainty of QML algorithms due to quantum mechanical effects, such as the uncertainty principle and the disruption of the coherence of quantum-superposition states by noise, as discussed in (2).
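    The decoherence mechanism mentioned above can be illustrated with a minimal sketch (not from the letter; the state, noise model, and noise strength are illustrative assumptions): a single qubit in an equal superposition, written as a density matrix, loses its off-diagonal coherence terms when a depolarizing noise channel is applied.

```python
import numpy as np

# Hypothetical example: a qubit in the superposition (|0> + |1>)/sqrt(2),
# represented as a 2x2 density matrix rho = |psi><psi|.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def depolarize(rho, p):
    """Depolarizing noise channel: with probability p the qubit state
    is replaced by the maximally mixed state I/2 (an assumed, simple
    stand-in for the environmental noise discussed in the letter)."""
    return (1 - p) * rho + p * np.eye(2) / 2

noisy = depolarize(rho, 0.5)

# The off-diagonal "coherence" terms shrink under noise; this is the
# disruption of quantum-superposition states described above.
print(abs(rho[0, 1]))    # 0.5  (full coherence)
print(abs(noisy[0, 1]))  # 0.25 (coherence halved by the noise)
```

    A QML model whose predictions depend on interference between superposed states would thus see its effective decision statistics drift as this coherence decays, which is one concrete way the “unintended decision” risk can arise.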

    In particular, owing to this risk, some decisions...

    Competing Interests: None declared.
