Perspective
ARTIFICIAL INTELLIGENCE

In defense of the black box


Science 05 Apr 2019:
Vol. 364, Issue 6435, pp. 26-27
DOI: 10.1126/science.aax0162

eLetters is an online forum for ongoing peer review. Submission of eLetters is open to all; eLetters are not edited, proofread, or indexed.


  • Black box in AI is not a black box

    Elizabeth A. Holm wrote an article entitled “In defense of the black box” (1). Holm of CMU conflates the intentional black-box problem with the merely ignorant one. Many researchers do not know that a deep learning model, the so-called black box, can be converted into an explainable decision tree. Geoffrey Everest Hinton, a former member of the CMU faculty (1982–1987), has proposed a way to eliminate the black-box problem (2). Hinton’s algorithm, a soft decision tree distilled from a deep network, has been implemented in several open-source repositories (3, 4). The explainable decision tree is available not only for deep learning based on GPU computing but also for ensemble methods based on CPU computing, through open-source machine learning libraries such as scikit-learn. In other words, the black-box problem in AI can be eliminated if we choose to do so; a minimal sketch of the distillation appears after this letter. A deliberately opaque AI system is another issue.

    References:
    1. E. A. Holm, Science 364, 26–27 (2019).
    2. https://arxiv.org/abs/1711.09784
    3. https://github.com/kimhc6028/soft-decision-tree
    4. https://github.com/AaronX121/Soft-Decision-Tree

    Competing Interests: None declared.
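
The conversion described in the letter above can be illustrated with a surrogate-model distillation in plain scikit-learn, which the letter names. This is a minimal sketch, not the soft-decision-tree implementations of references (3, 4): it fits an ordinary (hard) decision tree to a small neural network's predictions, and the dataset, network size, and tree depth are illustrative assumptions.

    # Distill a "black box" (a small MLP) into an explainable decision tree,
    # in the spirit of the soft-decision-tree idea of reference (2).
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0)

    # 1. Train the black box: a small multilayer perceptron.
    net = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(32, 16),
                                      max_iter=1000, random_state=0))
    net.fit(X_train, y_train)

    # 2. Distill: fit a shallow tree to the network's predictions, so the
    #    tree mimics the black box rather than the raw labels.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(X_train, net.predict(X_train))

    # 3. Inspect: the tree's splits are a human-readable surrogate explanation.
    print(export_text(tree, feature_names=list(data.feature_names)))
    print("network accuracy:", net.score(X_test, y_test))
    print("tree fidelity to network:", tree.score(X_test, net.predict(X_test)))

The surrogate explains the network only to the extent that its fidelity score is high; the soft decision tree of reference (2) addresses the same trade-off by training a probabilistic tree on the network's soft predictions.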
  • RE: In defense of the black box

    Elizabeth Holm nods to Douglas Adams and his notional ‘Deep Thought’ computer in her defense of ‘black box’ computation. However, Professor Holm makes little effort to help users of ‘black box’ algorithms distinguish between trustworthiness and truthiness. Even official regulation of commercial ‘black box’ implementations can fail (see, e.g., the Boeing 737 MAX flight control software) if appropriate engineering choices are overridden by managerial demands.

    At the very least, any ‘black box’ software should be distributed and used in a state of maximum transparency. This means the developer should provide software documentation as well as use cases, and the user, particularly when publishing results, should report the full suite of parameter selections and analysis choices; a sketch of one such record appears after this letter. In other words, scientists and engineers must be willing to provide the clarity that the scientific method demands.

    Competing Interests: None declared.
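
As a companion to the transparency called for in the letter above, here is a minimal sketch of recording parameter and analysis choices alongside a published result. The scikit-learn and standard-library calls are real, but the model, field names, and output file are illustrative assumptions, not a standard proposed in the letter.

    # Record every parameter and analysis choice next to a published result.
    import json
    import sys

    import sklearn
    from sklearn.ensemble import RandomForestClassifier

    model = RandomForestClassifier(n_estimators=200, max_depth=5,
                                   random_state=42)

    provenance = {
        "library": "scikit-learn " + sklearn.__version__,
        "python": sys.version.split()[0],
        "model": type(model).__name__,
        "parameters": model.get_params(),  # every knob, defaults included
        "analysis_choices": {              # hypothetical example fields
            "train_test_split": 0.75,
            "random_seed": 42,
            "preprocessing": "z-score standardization",
        },
    }

    # Archive this file with the publication so readers can audit the choices.
    with open("analysis_provenance.json", "w") as f:
        json.dump(provenance, f, indent=2, default=str)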
  • RE: In defense of the black box
    • John Pastor, Professor, Dept. of Biology, University of Minnesota Duluth

    Dear Colleagues,

    As with many things, that esteemed philosopher of science, Richard Feynman, said it best. On numerous occasions, he remarked on how much fun it is to think about the puzzles and problems of science. Will AI take the fun out of thinking about the puzzles of science if we don't understand why the answers the black box gives us are what they are?

    Competing Interests: None declared.