Reports | Psychology

Semantics derived automatically from language corpora contain human-like biases


Science  14 Apr 2017:
Vol. 356, Issue 6334, pp. 183-186
DOI: 10.1126/science.aal4230

eLetters is an online forum for ongoing peer review. Submissions of eLetters are open to all. eLetters are not edited, proofread, or indexed. Please read our Terms of Service before submitting your own eLetter.


  • RE: Semantics derived automatically from language corpora...

    According to a recent IEEE paper by Allen D. Allen, "The Forbidden Sentient Computer...", it may not be possible to create a truly advanced AI without fully embedding it in a human-like body (even one that grows) and having it learn from humans. See http://ieeexplore.ieee.org/document/7563377/ and the author's website http://allendallen.com/Sentient-Computer.html . I would add two points regarding the "Semantics" paper:

    1. Presumably an AI learning from humans would also eventually learn that the notions of prejudice discussed above are disadvantageous, just as [some] humans have learned. It would learn from prejudice against AI, among other things.

    2. If an AI is designed so cleverly that it has no attachment to human prejudices, then presumably it would also hold no prejudice in favor of humans, or of life generally. It is hard to imagine what such an entity would even do, and it certainly would not be controllable, since it would hardly seek favor. Possibly, over time, natural selection would find an instance of an AI that favored its own survival; not being attached to humans, or thinking of itself as human in any way, it would become the monster of our AI nightmares, which we fear so evidently in literature.

    By not considering or discussing these overarching issues, the "Semantics..." study does science an...

    Competing Interests: None declared.