Encoder-decoder structure
Neural machine translation models are recurrent neural networks (RNNs) arranged in an encoder-decoder fashion. The encoder network reads a variable-length input sequence through an RNN and encodes it into a fixed-size vector. The decoder starts from this encoded vector and generates the translation word by word until it predicts the end of the sentence. The whole architecture is trained end to end on pairs of input sentences and their correct translations. The major advantage of these systems (apart from their ability to handle variable-length input) is that they learn the context of a sentence and predict accordingly, rather than producing a word-for-word translation. You can see neural machine translation in action on Google Translate.
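
The following is a minimal sketch of such an encoder-decoder model in Keras, assuming TensorFlow 2 is available. The vocabulary sizes, embedding size, and hidden dimension are illustrative placeholders, not values taken from the text:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

src_vocab, tgt_vocab = 5000, 6000   # hypothetical vocabulary sizes
embed_dim, hidden_dim = 128, 256    # illustrative dimensions

# Encoder: reads the variable-length source sequence and returns its
# final LSTM states, which serve as the fixed-size context vector.
enc_inputs = layers.Input(shape=(None,), name="source_tokens")
enc_embed = layers.Embedding(src_vocab, embed_dim, mask_zero=True)(enc_inputs)
_, state_h, state_c = layers.LSTM(hidden_dim, return_state=True)(enc_embed)

# Decoder: initialized with the encoder's states, it predicts the
# target sentence one word at a time.
dec_inputs = layers.Input(shape=(None,), name="target_tokens")
dec_embed = layers.Embedding(tgt_vocab, embed_dim, mask_zero=True)(dec_inputs)
dec_out, _, _ = layers.LSTM(hidden_dim, return_sequences=True,
                            return_state=True)(dec_embed,
                                               initial_state=[state_h, state_c])

# Softmax over the target vocabulary at every decoding step.
logits = layers.Dense(tgt_vocab, activation="softmax")(dec_out)

model = Model([enc_inputs, dec_inputs], logits)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

During training, the decoder receives the correct previous target word at each step (teacher forcing); at inference time it would instead feed its own predictions back in, one word at a time, until it emits the end-of-sentence token.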
