Let’s take a look at how Google Translate’s neural network works behind the scenes! Read the references below for the best understanding of neural machine translation.

SUBSCRIBE TO CODE EMPORIUM: https://www.youtube.com/c/CodeEmporium?sub_confirmation=1

To submit your video to CS Dojo Community, please use this link: https://csdojo.io/enter


[1] Landmark paper of LSTM (Hochreiter & Schmidhuber, 1997): https://www.bioinf.jku.at/publication…
[2] Landmark paper of Neural Machine Translation (NMT) (Kalchbrenner et al., 2013): https://arxiv.org/abs/1306.3584
[3] Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation (Cho et al., 2014): https://arxiv.org/abs/1406.1078
[4] Sequence to Sequence Learning with Neural Networks (Sutskever et al., 2014): https://arxiv.org/abs/1409.3215
[5] The paper that introduced the Bidirectional RNN (Schuster & Paliwal, 1997): https://pdfs.semanticscholar.org/4b80…
[6] On the Properties of Neural Machine Translation: Encoder-Decoder Approaches (Cho et al., 2014): https://arxiv.org/pdf/1409.1259.pdf — see Fig. 4(a)
[7] Neural Machine Translation by Jointly Learning to Align and Translate (Bahdanau et al., 2016): https://arxiv.org/pdf/1409.0473.pdf — see Sec. 5.2.2
[8] Main Google Translate paper, Google’s Neural Machine Translation System (Wu et al., 2016): https://ai.google/research/pubs/pub45610