
Encoder-decoder structure

Neural machine translation models are recurrent neural networks (RNNs) arranged in an encoder-decoder fashion. The encoder network reads a variable-length input sequence through an RNN and encodes it into a fixed-size vector. The decoder starts from this encoded vector and generates the translation word by word, until it predicts the end of the sentence. The whole architecture is trained end-to-end on pairs of input sentences and their correct translations. The major advantage of these systems, apart from their ability to handle variable-length input, is that they learn the context of a sentence and predict accordingly, rather than performing a word-for-word translation. Neural machine translation can best be seen in action on Google Translate, as shown in the following screenshot:

Sourced from Google Translate
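To make the structure concrete, here is a minimal sketch of such an encoder-decoder pair in PyTorch. The GRU cell, the greedy decoding loop, and all dimensions and token IDs are illustrative assumptions, not the configuration of any particular production system:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Reads a variable-length source sequence, returns a fixed-size state."""
    def __init__(self, vocab_size, embed_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, src):                  # src: (batch, src_len) token IDs
        _, hidden = self.rnn(self.embed(src))
        return hidden                         # fixed-size summary of the sentence

class Decoder(nn.Module):
    """Generates the translation one token at a time from the encoded state."""
    def __init__(self, vocab_size, embed_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token, hidden):         # token: (batch, 1) previous word
        output, hidden = self.rnn(self.embed(token), hidden)
        return self.out(output.squeeze(1)), hidden   # logits over target vocab

def translate(encoder, decoder, src, sos_id, eos_id, max_len=50):
    """Greedy decoding: emit words until the end-of-sentence token is predicted.

    Assumes batch size 1; sos_id/eos_id are hypothetical special-token IDs.
    """
    hidden = encoder(src)
    token = torch.full((1, 1), sos_id, dtype=torch.long)
    result = []
    for _ in range(max_len):
        logits, hidden = decoder(token, hidden)
        token = logits.argmax(dim=-1, keepdim=True)
        if token.item() == eos_id:
            break
        result.append(token.item())
    return result
```

Note how the decoder feeds its own previous prediction back in at each step; because the only bridge between the two networks is the encoder's hidden state, the model is forced to compress the whole source sentence, and hence its context, into that vector before any target word is produced.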