Report

A neural algorithm for a fundamental computing problem

Sanjoy Dasgupta, Charles F. Stevens, Saket Navlakha

Science  10 Nov 2017:
Vol. 358, Issue 6364, pp. 793-796
DOI: 10.1126/science.aam9868
  • Fig. 1 Mapping between the fly olfactory circuit and locality-sensitive hashing (LSH).

    (A) Schematic of the fly olfactory circuit. In step 1, 50 odorant receptor neurons (ORNs) in the fly’s nose send axons to 50 projection neurons (PNs) in the glomeruli; as a result of this projection, each odor is represented by an exponential distribution of firing rates, with the same mean for all odors and all odor concentrations. In step 2, the PNs expand the dimensionality, projecting to 2000 Kenyon cells (KCs) connected by a sparse, binary random projection matrix. In step 3, the KCs receive feedback inhibition from the anterior paired lateral (APL) neuron, which silences all but the highest-firing 5% of KCs; this 5% constitutes the tag (hash) for the odor. (B) Illustrative odor responses. Similar pairs of odors (e.g., methanol and ethanol) are assigned more similar tags than are dissimilar odors. Darker shading denotes higher activity. (C) Differences between conventional LSH and the fly algorithm. In the example shown, the computational complexity for LSH and for the fly is the same. The input dimensionality d = 5. LSH computes m = 3 random projections, each of which requires 10 operations (five multiplications plus five additions). The fly computes m = 15 random projections, each of which requires two addition operations. Thus, both require 30 operations in total. x, input feature vector; r, Gaussian random variable; w, bin width constant for discretization. (Minimal code sketches of steps 1 to 3 and of the LSH baseline in (C) appear after Table 1.)

  • Fig. 2 Empirical comparison of different random projection types and tag-selection methods.

    In all plots, the x axis is the length of the hash, and the y axis is the mean average precision, denoting how accurately the true nearest neighbors are found (higher is better). (A) Sparse, binary random projections offer near-identical performance to that of dense, Gaussian random projections, but the former provide a large savings in computation. (B) Expanding the dimensionality (from k to 20k) and then applying winner-take-all (WTA) sparsification further boosts performance relative to no expansion. Results are consistent across all three benchmark data sets. Error bars indicate standard deviation over 50 trials. (A sketch of the mean-average-precision computation appears after Table 1.)

  • Fig. 3 Overall comparison between the fly algorithm and LSH.

    In all plots, the x axis is the length of the hash, and the y axis is the mean average precision (higher is better). A 10d expansion was used for the fly. Across all three data sets, the fly’s method outperforms LSH, most prominently for short hash lengths. Error bars indicate standard deviation over 50 trials.

  • Table 1 The generality of locality-sensitive hashing in the brain.

    Shown are the steps used in the fly olfactory circuit and their potential analogs in vertebrate brain regions.

    |                 | Step 1                              | Random projection                               | Step 2 (expansion)                            | Step 3 (WTA)               |
    |-----------------|-------------------------------------|-------------------------------------------------|-----------------------------------------------|----------------------------|
    | Fly olfaction   | Antennal lobe; 50 glomeruli         | Sparse, binary; samples six glomeruli           | Mushroom body; 2000 Kenyon cells              | APL neuron; top 5%         |
    | Mouse olfaction | Olfactory bulb; 1000 glomeruli      | Dense, weak; samples all glomeruli              | Piriform cortex; 100,000 semilunar cells      | Layer 2A; top 10%          |
    | Rat cerebellum  | Precerebellar nuclei                | Sparse, binary; samples four precerebellar nuclei | Granule cell layer; 250 million granule cells | Golgi cells; top 10 to 20% |
    | Rat hippocampus | Entorhinal cortex; 30,000 grid cells | Unknown                                        | Dentate gyrus; 1.2 million granule cells      | Hilar cells; top 2%        |
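
The following is a minimal Python sketch of the three-step fly hashing scheme from the Fig. 1 legend, using the six-glomerulus fan-in from Table 1. The dimensions (50 PNs, 2000 KCs) and the 5% winner-take-all cutoff come from the text; the function name, the mean-centering stand-in for step 1's normalization, and all other implementation details are illustrative assumptions rather than the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

D_PN = 50        # projection neurons (input dimension after step 1)
D_KC = 2000      # Kenyon cells (expanded dimension, step 2)
FAN_IN = 6       # each KC samples six glomeruli (sparse, binary projection)
TOP_FRAC = 0.05  # APL inhibition leaves the top 5% of KCs firing (step 3)

# Step 2's projection matrix (D_KC x D_PN): sparse and binary,
# one row per KC with FAN_IN ones at random positions.
M = np.zeros((D_KC, D_PN))
for row in M:
    row[rng.choice(D_PN, size=FAN_IN, replace=False)] = 1.0

def fly_hash(x):
    """Return the sparse binary tag (hash) for one odor vector x."""
    # Step 1 (assumed stand-in): center the firing rates so every odor
    # has the same mean, mimicking the legend's concentration-invariant
    # distribution of PN rates.
    x = x - x.mean()
    # Step 2: expand to D_KC dimensions. Each output is a sum of FAN_IN
    # inputs, so only additions are needed (no multiplications).
    y = M @ x
    # Step 3: winner-take-all -- silence all but the top 5% of KCs.
    k = int(TOP_FRAC * D_KC)
    tag = np.zeros(D_KC, dtype=bool)
    tag[np.argsort(y)[-k:]] = True
    return tag

odor = rng.exponential(size=D_PN)
print(fly_hash(odor).sum())  # 100 active KCs (5% of 2000)
```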
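Panel (C) of Fig. 1 compares this against conventional LSH. A textbook quantized random-projection hash, h_i(x) = floor((r_i · x) / w), matches the symbols defined in the legend (x, input vector; r, Gaussian random variable; w, bin width); the sketch below assumes this standard form, which may differ in detail from the exact variant benchmarked.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, w = 5, 3, 1.0          # input dimension, number of projections, bin width

R = rng.normal(size=(m, d))  # dense Gaussian random projection vectors r_i

def lsh_hash(x):
    """Quantized random projections: h_i(x) = floor((r_i . x) / w)."""
    # Each of the m dot products costs d multiplications + d additions:
    # here 3 x (5 + 5) = 30 operations, matching the Fig. 1C count.
    return np.floor((R @ x) / w).astype(int)

x = rng.random(d)
print(lsh_hash(x))  # three integer bucket indices, one per projection
```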
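Figures 2 and 3 report mean average precision (mAP): how faithfully the nearest neighbors retrieved by the hash recover the true nearest neighbors in input space. The sketch below is one plausible reading of that metric; the neighborhood size and distance choices are assumptions, not the paper's exact evaluation protocol.

```python
import numpy as np

def average_precision(true_ids, ranked_ids):
    """Average precision of a ranked retrieval list vs. the true neighbors."""
    true_set, hits, ap = set(true_ids), 0, 0.0
    for rank, idx in enumerate(ranked_ids, start=1):
        if idx in true_set:
            hits += 1
            ap += hits / rank
    return ap / len(true_set)

def mean_average_precision(X, tags, n_neighbors=10):
    """mAP over all queries: true neighbors by Euclidean distance on the
    inputs X, retrieved neighbors ranked by Hamming distance on the tags."""
    aps = []
    for i in range(len(X)):
        # Ground truth: nearest neighbors in input space (skip the query).
        true_ids = np.argsort(np.linalg.norm(X - X[i], axis=1))[1:n_neighbors + 1]
        # Retrieval: nearest neighbors in hash space.
        ranked = [j for j in np.argsort((tags != tags[i]).sum(axis=1))
                  if j != i][:n_neighbors]
        aps.append(average_precision(true_ids, ranked))
    return float(np.mean(aps))
```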

Supplementary Materials

  • A neural algorithm for a fundamental computing problem

    Sanjoy Dasgupta, Charles F. Stevens, Saket Navlakha

    Materials/Methods, Supplementary Text, Tables, Figures, and/or References

    • Materials and Methods
    • Supplementary Text
    • Figs. S1 to S3
    • References
