Report

Reducing the Dimensionality of Data with Neural Networks


Science  28 Jul 2006:
Vol. 313, Issue 5786, pp. 504-507
DOI: 10.1126/science.1127647

eLetters is an online forum for ongoing peer review.
  • Reducing the dimensionality of time-series data with deep learning techniques

    High-dimensional time-series data can be encoded as low-dimensional time-series data by combining recurrent neural networks with autoencoder networks. A multilayer recurrent neural network can be structured with a small central hidden layer and trained so that the high-dimensional sequential outputs reproduce the high-dimensional sequential inputs. The network extracts the features of past sequence elements and encodes them as state vectors; the high-dimensional sequential input vectors and the state vectors are thereby transformed into low-dimensional vectors, which extracts sequence features and helps avoid overfitting. A nonlinear function (e.g., a sigmoid or rectified linear unit) can serve as the activation function of each neuron to produce a nonlinear mapping, and an optimizer (e.g., a stochastic gradient descent optimizer) can be applied to update each weight in the multilayer recurrent neural network.
    For time-series applications (e.g., speech recognition, semantic recognition), the features of high-dimensional data can be extracted as low-dimensional time-series data in a pre-training stage, again helping to avoid overfitting. Furthermore, modifications of the network structure can improve performance. For instance, the long short-term memory method can be applied instead of plain recurrent neural networks,...

    Competing Interests: None declared.
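The recurrent autoencoder the eLetter describes can be sketched as follows. This is a minimal illustration, not the author's implementation: all sizes (an 8-dimensional sequence of length 5, a 2-dimensional central code) are assumptions, and, to keep the sketch short, the gradients for the stochastic-gradient-descent updates are estimated by finite differences rather than by full backpropagation through time.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical sizes: an 8-dimensional sequence of length 5 is squeezed
# through a 2-dimensional code (the "small central hidden layer").
D_in, D_code, T = 8, 2, 5

Wx = rng.normal(0.0, 0.5, (D_code, D_in))    # encoder: input -> state
Wh = rng.normal(0.0, 0.5, (D_code, D_code))  # encoder: state -> state
Wd = rng.normal(0.0, 0.5, (D_code, D_code))  # decoder: state -> state
Wo = rng.normal(0.0, 0.5, (D_in, D_code))    # decoder: state -> output

def encode(X):
    """Fold the whole sequence into one low-dimensional state vector."""
    h = np.zeros(D_code)
    for x_t in X:
        h = sigmoid(Wx @ x_t + Wh @ h)
    return h

def decode(h, steps):
    """Unroll the code back into a high-dimensional sequence."""
    outs = []
    for _ in range(steps):
        h = sigmoid(Wd @ h)
        outs.append(sigmoid(Wo @ h))
    return np.array(outs)

def reconstruction_loss(X):
    X_hat = decode(encode(X), len(X))
    return np.mean((X - X_hat) ** 2)

X = rng.uniform(0.0, 1.0, (T, D_in))  # toy high-dimensional time series
loss_before = reconstruction_loss(X)

# Gradient-descent training: the letter suggests a stochastic gradient
# descent optimizer; here gradients are estimated by central finite
# differences instead of backprop-through-time, purely for brevity.
lr, eps = 0.2, 1e-5
for _ in range(40):
    for W in (Wx, Wh, Wd, Wo):
        g = np.zeros_like(W)
        for idx in np.ndindex(W.shape):
            old = W[idx]
            W[idx] = old + eps; up = reconstruction_loss(X)
            W[idx] = old - eps; dn = reconstruction_loss(X)
            W[idx] = old
            g[idx] = (up - dn) / (2.0 * eps)
        W -= lr * g

loss_after = reconstruction_loss(X)
code = encode(X)  # the low-dimensional summary of the whole sequence
```

Because the inputs serve as their own targets, minimizing the reconstruction error forces the 2-dimensional code to retain whatever sequence structure the decoder needs, which is the dimensionality-reduction effect the letter describes.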
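The letter's closing suggestion, replacing the plain recurrent units with long short-term memory, amounts to swapping in a gated state update. Below is a sketch of a single LSTM cell under assumed sizes (8-dimensional inputs, 2-dimensional state); bias terms are omitted for brevity, so this is a simplified form of the standard cell, not a drop-in library component.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical sizes: 8-dimensional inputs, 2-dimensional hidden state.
D_in, D_h = 8, 2

# One weight matrix per gate, acting on the concatenation [x_t; h_{t-1}].
# Bias terms are omitted to keep the sketch minimal.
Wf, Wi, Wo, Wc = (rng.normal(0.0, 0.5, (D_h, D_in + D_h)) for _ in range(4))

def lstm_step(x, h, c):
    z = np.concatenate([x, h])
    f = sigmoid(Wf @ z)              # forget gate: what to keep of c
    i = sigmoid(Wi @ z)              # input gate: what to write to c
    o = sigmoid(Wo @ z)              # output gate: what to expose as h
    c = f * c + i * np.tanh(Wc @ z)  # additive cell-state update
    h = o * np.tanh(c)
    return h, c

X = rng.uniform(0.0, 1.0, (5, D_in))  # toy high-dimensional sequence
h, c = np.zeros(D_h), np.zeros(D_h)
for x_t in X:
    h, c = lstm_step(x_t, h, c)       # h is the low-dimensional summary
```

The additive update of the cell state `c` is what lets gradients flow across long sequences, which is why the letter proposes LSTM as a performance-improving modification of the recurrent encoder.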
