- Hands-On Deep Learning Architectures with Python
- Yuxi (Hayden) Liu, Saransh Mehta
Encoder-decoder structure
Neural machine translation models are recurrent neural networks (RNNs) arranged in an encoder-decoder fashion. The encoder network reads a variable-length input sequence through its RNN and encodes it into a fixed-size vector. The decoder starts from this encoded vector and generates the translation word by word until it predicts the end of the sentence. The whole architecture is trained end to end on pairs of input sentences and their correct translations. The major advantage of these systems, apart from their ability to handle variable-length input, is that they learn the context of the whole sentence and predict accordingly, rather than producing a word-for-word translation. Neural machine translation can be seen in action on Google Translate.
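To make the data flow concrete, the following is a minimal Keras sketch of such an encoder-decoder (sequence-to-sequence) model. The vocabulary sizes and layer dimensions are illustrative assumptions rather than values from the text, and the decoder is shown in its teacher-forced training form:

```python
from tensorflow.keras import layers, Model

src_vocab, tgt_vocab = 8000, 8000   # assumed vocabulary sizes (illustrative)
embed_dim, hidden_dim = 256, 512    # assumed model dimensions (illustrative)

# Encoder: reads a variable-length source sequence and compresses it
# into a fixed-size state (the LSTM's final h and c vectors).
enc_inputs = layers.Input(shape=(None,), name="source_tokens")
enc_embed = layers.Embedding(src_vocab, embed_dim)(enc_inputs)
_, state_h, state_c = layers.LSTM(hidden_dim, return_state=True)(enc_embed)

# Decoder: initialized with the encoder's state, it emits the translation
# one token at a time (here fed the shifted target sequence for training).
dec_inputs = layers.Input(shape=(None,), name="target_tokens")
dec_embed = layers.Embedding(tgt_vocab, embed_dim)(dec_inputs)
dec_outputs, _, _ = layers.LSTM(
    hidden_dim, return_sequences=True, return_state=True
)(dec_embed, initial_state=[state_h, state_c])
probs = layers.Dense(tgt_vocab, activation="softmax")(dec_outputs)

# Trained end to end on (source sentence, correct translation) pairs.
model = Model([enc_inputs, dec_inputs], probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

At inference time, the decoder is instead run one step at a time, feeding each predicted word back in as the next input until the end-of-sentence token is produced.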
