Report

Bumble bees display cross-modal object recognition between visual and tactile senses

Science  21 Feb 2020:
Vol. 367, Issue 6480, pp. 910-912
DOI: 10.1126/science.aay8064

These bees have “seen” that before

Humans excel at mental imagery, and we can transfer those images across senses. For example, an object out of view, but for which we hold a mental image, can still be recognized by touch. Such cross-modal recognition is highly adaptive and has recently been identified in other mammals, but whether it is widespread has been debated. Solvi et al. tested for this behavior in bumble bees, which are increasingly recognized as having relatively advanced cognitive skills (see the Perspective by von der Emde and Burt de Perera). They found that the bees could identify objects by shape in the dark if they had seen, but not touched, them in the light, and vice versa, demonstrating a clear ability to transfer recognition across senses.

Science, this issue p. 910; see also p. 850

Abstract

Many animals can associate object shapes with incentives. However, such behavior is possible without storing images of shapes in memory that are accessible to more than one sensory modality. One way to explore whether there are modality-independent internal representations of object shapes is to investigate cross-modal recognition—experiencing an object in one sensory modality and later recognizing it in another. We show that bumble bees trained to discriminate two differently shaped objects (cubes and spheres) using only touch (in darkness) or vision (in light, but barred from touching the objects) could subsequently discriminate those same objects using only the other sensory information. Our experiments demonstrate that bumble bees possess the ability to integrate sensory information in a way that requires modality-independent internal representations.
