Encoder-decoder structure

Neural machine translation models are Recurrent Neural Networks (RNNs) arranged in an encoder-decoder fashion. The encoder network consumes a variable-length input sequence through an RNN and encodes it into a fixed-size vector. The decoder begins with this encoded vector and generates the translation word by word until it predicts the end-of-sentence token. The whole architecture is trained end-to-end on pairs of input sentences and their correct translations. The major advantage of these systems (apart from their ability to handle variable-length input) is that they learn the context of a sentence and predict accordingly, rather than performing a word-for-word translation. Neural machine translation can be best seen in action on Google Translate, as in the following screenshot:
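The data flow described above can be sketched in plain Python. This is a minimal illustration with random, untrained weights and a tiny hypothetical vocabulary (the words and matrix names are invented for the example): the encoder folds a sequence of any length into one fixed-size hidden vector, and the decoder starts from that vector and emits words until it predicts the end-of-sentence token. It shows the encoder-decoder mechanics only, not a working translator.

```python
import math
import random

random.seed(0)

HIDDEN = 4
SRC_VOCAB = ["<eos>", "ich", "mag", "katzen"]   # hypothetical source vocabulary
TGT_VOCAB = ["<eos>", "i", "like", "cats"]      # hypothetical target vocabulary

def rand_matrix(rows, cols):
    # Random untrained weights; real systems learn these end-to-end.
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

def one_hot(index, size):
    return [1.0 if i == index else 0.0 for i in range(size)]

def rnn_step(x, h, W, U, b):
    # One recurrent update: h' = tanh(W x + U h + b)
    return [math.tanh(sum(W[i][j] * x[j] for j in range(len(x)))
                      + sum(U[i][j] * h[j] for j in range(len(h)))
                      + b[i]) for i in range(len(h))]

# Encoder and decoder parameters (untrained, for illustration only)
We, Ue, be = rand_matrix(HIDDEN, len(SRC_VOCAB)), rand_matrix(HIDDEN, HIDDEN), [0.0] * HIDDEN
Wd, Ud, bd = rand_matrix(HIDDEN, len(TGT_VOCAB)), rand_matrix(HIDDEN, HIDDEN), [0.0] * HIDDEN
Vo = rand_matrix(len(TGT_VOCAB), HIDDEN)  # maps hidden state to target-vocab scores

def encode(tokens):
    # A sequence of any length is folded into one fixed-size vector.
    h = [0.0] * HIDDEN
    for tok in tokens:
        h = rnn_step(one_hot(SRC_VOCAB.index(tok), len(SRC_VOCAB)), h, We, Ue, be)
    return h

def decode(context, max_len=10):
    # Start from the encoded vector; emit words until <eos> is predicted.
    h, word, out = context, "<eos>", []
    for _ in range(max_len):
        h = rnn_step(one_hot(TGT_VOCAB.index(word), len(TGT_VOCAB)), h, Wd, Ud, bd)
        scores = [sum(Vo[k][j] * h[j] for j in range(HIDDEN)) for k in range(len(TGT_VOCAB))]
        word = TGT_VOCAB[scores.index(max(scores))]
        if word == "<eos>":
            break
        out.append(word)
    return out

context = encode(["ich", "mag", "katzen"])   # fixed-size regardless of input length
translation = decode(context)                # gibberish here, since weights are random
```

Note that inputs of different lengths all produce a context vector of the same size, which is exactly what lets the decoder be trained independently of source-sentence length.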

Sourced from Google Translate