Autoencoder

Figure: a schema of an autoencoder. An autoencoder has two main parts: an encoder that maps the message to a code, and a decoder that reconstructs the message from the code.

An autoencoder is a type of artificial neural network used to learn efficient codings of unlabeled data (unsupervised learning). An autoencoder learns two functions: an encoding function that transforms the input data into a code, and a decoding function that reconstructs the input data from that code. The autoencoder thereby learns an efficient representation (encoding) of a set of data, typically for dimensionality reduction, producing lower-dimensional embeddings that other machine learning algorithms can then use.[1]
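
A minimal sketch of this encoder/decoder scheme in PyTorch (the layer sizes 784 → 128 → 32, the optimizer settings, and the random stand-in batch are illustrative assumptions, not taken from the article). Training minimizes the reconstruction error between the network's output and its own input, so no labels are needed:

    import torch
    from torch import nn

    class Autoencoder(nn.Module):
        """Fully connected autoencoder: 784 -> 32 -> 784 (sizes are assumptions)."""
        def __init__(self, input_dim=784, code_dim=32):
            super().__init__()
            # Encoder: compresses the input into a low-dimensional code.
            self.encoder = nn.Sequential(
                nn.Linear(input_dim, 128), nn.ReLU(),
                nn.Linear(128, code_dim),
            )
            # Decoder: reconstructs the input from the code.
            self.decoder = nn.Sequential(
                nn.Linear(code_dim, 128), nn.ReLU(),
                nn.Linear(128, input_dim),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = Autoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    x = torch.rand(64, 784)  # stand-in batch, e.g. 64 flattened 28x28 images
    for _ in range(100):
        opt.zero_grad()
        loss = loss_fn(model(x), x)  # target is the input itself (unsupervised)
        loss.backward()
        opt.step()

After training, model.encoder(x) yields the 32-dimensional embeddings that downstream algorithms can consume.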

Variants exist that aim to make the learned representations assume useful properties.[2] Examples are regularized autoencoders (sparse, denoising, and contractive autoencoders), which are effective in learning representations for subsequent classification tasks,[3] and variational autoencoders, which can be used as generative models.[4] Autoencoders are applied to many problems, including facial recognition,[5] feature detection,[6] anomaly detection, and learning the meaning of words.[7][8] For data synthesis, autoencoders can also be used to randomly generate new data that resembles the input (training) data.[6]
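
As a sketch of one regularized variant, a denoising autoencoder is trained to reconstruct the clean input from a corrupted copy, which discourages the network from simply learning the identity function. The snippet below reuses model, x, opt, and loss_fn from the sketch above; the noise level of 0.3 is an arbitrary assumption:

    # Denoising variant: corrupt the input, reconstruct the clean original.
    for _ in range(100):
        opt.zero_grad()
        noisy_x = x + 0.3 * torch.randn_like(x)  # additive Gaussian corruption
        loss = loss_fn(model(noisy_x), x)        # target stays the clean input
        loss.backward()
        opt.step()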

  1. ^ Bank, Dor; Koenigstein, Noam; Giryes, Raja (2023). "Autoencoders". In Rokach, Lior; Maimon, Oded; Shmueli, Erez (eds.). Machine Learning for Data Science Handbook. pp. 353–374. doi:10.1007/978-3-031-24628-9_16. ISBN 978-3-031-24627-2.
  2. ^ Cite error: the named reference :0 was invoked but never defined.
  3. ^ Cite error: the named reference :4 was invoked but never defined.
  4. ^ Kingma, Diederik P.; Welling, Max (2019). "An Introduction to Variational Autoencoders". Foundations and Trends in Machine Learning. 12 (4): 307–392. arXiv:1906.02691. Bibcode:2019arXiv190602691K. doi:10.1561/2200000056. S2CID 174802445.
  5. ^ Hinton, Geoffrey E.; Krizhevsky, Alex; Wang, Sida D. (2011). "Transforming Auto-Encoders". International Conference on Artificial Neural Networks. Springer, Berlin, Heidelberg. pp. 44–51.
  6. ^ a b Géron, Aurélien (2019). Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow. Canada: O’Reilly Media, Inc. pp. 739–740.
  7. ^ Liou, Cheng-Yuan; Huang, Jau-Chi; Yang, Wen-Chie (2008). "Modeling word perception using the Elman network". Neurocomputing. 71 (16–18): 3150. doi:10.1016/j.neucom.2008.04.030.
  8. ^ Liou, Cheng-Yuan; Cheng, Wei-Chen; Liou, Jiun-Wei; Liou, Daw-Ran (2014). "Autoencoder for words". Neurocomputing. 139: 84–96. doi:10.1016/j.neucom.2013.09.055.
