Google Neural Machine Translation (GNMT) was a neural machine translation (NMT) system developed by Google and introduced in November 2016 that used an artificial neural network to increase fluency and accuracy in Google Translate.[1][2][3][4] The neural network consisted of two main blocks, an encoder and a decoder, both of LSTM architecture with 8 1024-wide layers each and a simple 1-layer 1024-wide feedforward attention mechanism connecting them.[4][5] The total number of parameters has been variously described as over 160 million,[6] approximately 210 million,[7] 278 million[8] or 380 million.[9] By 2020, the system had been replaced by another deep learning system based on transformers.[10]
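The shape described above can be made concrete with a minimal PyTorch sketch: an 8-layer, 1024-unit LSTM encoder and decoder joined by a single-layer feedforward (additive) attention scorer. The dimensions follow the text; the vocabulary size, the additive-attention formulation, and the greedy step loop are illustrative assumptions, and the paper's residual connections, bidirectional bottom encoder layer, and beam search are omitted for brevity.

```python
import torch
import torch.nn as nn

HIDDEN, LAYERS, VOCAB = 1024, 8, 32000  # VOCAB is an assumed placeholder


class AdditiveAttention(nn.Module):
    """A 1-layer, 1024-wide feedforward scorer over encoder states."""

    def __init__(self, hidden=HIDDEN):
        super().__init__()
        self.proj = nn.Linear(2 * hidden, hidden)      # the single hidden layer
        self.score = nn.Linear(hidden, 1, bias=False)  # scalar score per position

    def forward(self, dec_state, enc_states):
        # dec_state: (batch, hidden); enc_states: (batch, src_len, hidden)
        query = dec_state.unsqueeze(1).expand_as(enc_states)
        energy = self.score(torch.tanh(self.proj(torch.cat([query, enc_states], dim=-1))))
        weights = torch.softmax(energy.squeeze(-1), dim=-1)           # (batch, src_len)
        return torch.bmm(weights.unsqueeze(1), enc_states).squeeze(1)  # context vector


class Seq2Seq(nn.Module):
    """Encoder-decoder with the stated 8 x 1024 LSTM stacks."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.encoder = nn.LSTM(HIDDEN, HIDDEN, LAYERS, batch_first=True)
        self.decoder = nn.LSTM(2 * HIDDEN, HIDDEN, LAYERS, batch_first=True)
        self.attention = AdditiveAttention()
        self.out = nn.Linear(HIDDEN, VOCAB)

    def forward(self, src, tgt):
        enc_states, _ = self.encoder(self.embed(src))  # encode the whole sentence
        state, logits = None, []
        for t in range(tgt.size(1)):                   # one decoder step per token
            dec_in = self.embed(tgt[:, t])
            query = state[0][-1] if state else torch.zeros_like(dec_in)
            context = self.attention(query, enc_states)
            step_in = torch.cat([dec_in, context], dim=-1).unsqueeze(1)
            dec_out, state = self.decoder(step_in, state)
            logits.append(self.out(dec_out.squeeze(1)))
        return torch.stack(logits, dim=1)              # (batch, tgt_len, vocab)
```

A forward pass with random token ids, e.g. `Seq2Seq()(torch.randint(0, VOCAB, (2, 7)), torch.randint(0, VOCAB, (2, 5)))`, returns per-step vocabulary logits; attending over all encoder states at every step is what lets the decoder condition on the entire source sentence rather than a fixed-length summary.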
GNMT improved translation quality by applying an example-based machine translation (EBMT) method, in which the system learns from millions of translation examples.[2] The proposed architecture was first tested on over a hundred languages supported by Google Translate.[2] With its large end-to-end framework, the system learned over time to produce better, more natural translations.[1] GNMT translates whole sentences at a time, rather than piece by piece.[1] The network can undertake interlingual machine translation by encoding the semantics of the sentence, rather than by memorizing phrase-to-phrase translations.[2][11]
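The interlingual behaviour can be illustrated with the target-language-token convention from the multilingual follow-up work: a single shared model is steered toward a target language by prepending an artificial token to the source sentence, which is what makes zero-shot pairs (e.g. Spanish to Japanese without an English pivot) possible. A minimal sketch; the token format follows the published convention, but `tag_for_target` and the example sentences are illustrative assumptions.

```python
# Hedged sketch of the multilingual GNMT input convention: the target
# language is selected by an artificial token prepended to the source text,
# so one model can serve many language pairs. tag_for_target is a
# hypothetical helper, not part of any published API.
def tag_for_target(sentence: str, target_lang: str) -> str:
    """Prepend a target-language token, e.g. '<2es>' to request Spanish."""
    return f"<2{target_lang}> {sentence}"

# The same model weights would handle both requests; only the token differs.
print(tag_for_target("How are you?", "es"))   # -> "<2es> How are you?"
print(tag_for_target("¿Cómo estás?", "ja"))   # Spanish -> Japanese, zero-shot
```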
^ a b Wu, Yonghui; Schuster, Mike; Chen, Zhifeng; Le, Quoc V.; Norouzi, Mohammad (2016). "Google's neural machine translation system: Bridging the gap between human and machine translation". arXiv:1609.08144. Bibcode:2016arXiv160908144W.
^ Boitet, Christian; Blanchon, Hervé; Seligman, Mark; Bellynck, Valérie (2010). "MT on and for the Web" (PDF). Archived from the original (PDF) on March 29, 2017. Retrieved December 1, 2016.