Replicator Neural Networks for Universal Optimal Source Coding

Science  29 Sep 1995:
Vol. 269, Issue 5232, pp. 1860-1863
DOI: 10.1126/science.269.5232.1860


Replicator neural networks self-organize by using their inputs as desired outputs; they internally form a compressed representation for the input data. A theorem shows that a class of replicator networks can, through the minimization of mean squared reconstruction error (for instance, by training on raw data examples), carry out optimal data compression for arbitrary data vector sources. Data manifolds, a new general model of data sources, are then introduced and a second theorem shows that, in a practically important limiting case, optimal-compression replicator networks operate by creating an essentially unique natural coordinate system for the manifold.
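The core mechanism described here, a network trained to reproduce its own input so that a narrow internal layer is forced to form a compressed code, can be sketched with a minimal linear two-layer network trained by gradient descent on mean squared reconstruction error. This is an illustrative toy, not the paper's actual multilayer architecture; the data, dimensions, and variable names are all assumptions chosen for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic source: 8-D data vectors that actually lie on a 2-D linear manifold,
# so a 2-unit internal representation can in principle reconstruct them exactly.
latent = rng.normal(size=(500, 2))
mix = rng.normal(size=(2, 8))
X = latent @ mix  # shape (500, 8)

d_in, d_code = 8, 2
W_enc = rng.normal(scale=0.1, size=(d_in, d_code))   # input -> compressed code
W_dec = rng.normal(scale=0.1, size=(d_code, d_in))   # code -> reconstruction

lr = 0.01
for _ in range(2000):
    Z = X @ W_enc        # compressed internal representation
    X_hat = Z @ W_dec    # network output
    err = X_hat - X      # replicator principle: the desired output IS the input
    # Gradients of the mean squared reconstruction error w.r.t. each weight matrix
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse = np.mean((X - (X @ W_enc) @ W_dec) ** 2)
print(f"final reconstruction MSE: {mse:.6f}")
```

Because the synthetic data are exactly rank 2, minimizing reconstruction error drives the MSE toward zero and the 2-D code `Z` becomes a coordinate system for the data manifold, a loose analogue of the "natural coordinate system" result stated in the abstract.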
