Memory and recurrent networks

Memory is often associated with the Recurrent Neural Network (RNN), but that association is not entirely accurate. An RNN is really only useful for storing a sequence of events, or what you might call a temporal sense, a sense of time if you will. An RNN does this by feeding its state back into itself in a recursive, or recurrent, loop. An example of how this looks is shown here:

Unfolded recurrent neural network

The diagram shows the internal representation of a recurrent neuron unfolded across a number of time steps, or iterations, where x represents the input at a time step and h denotes the hidden state. The network weights U (input to hidden), W (hidden to hidden), and V (hidden to output) remain the same for all time steps and are trained using a technique called Backpropagation Through Time (BPTT). We won't go into the math of BPTT and will leave that for the reader to discover on their own; just realize that the weights of a recurrent network are optimized with a cost-gradient method, with the gradients accumulated across the unfolded time steps.
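The unfolded computation in the diagram can be sketched in a few lines of NumPy. This is a minimal, illustrative forward pass only (no training), with made-up layer sizes; the names U, W, and V follow the diagram, and a tanh activation is assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out, n_steps = 3, 4, 2, 5
U = rng.normal(0, 0.1, (n_hidden, n_in))      # input -> hidden
W = rng.normal(0, 0.1, (n_hidden, n_hidden))  # hidden -> hidden (the recurrent loop)
V = rng.normal(0, 0.1, (n_out, n_hidden))     # hidden -> output

xs = rng.normal(size=(n_steps, n_in))  # one input vector x per time step
h = np.zeros(n_hidden)                 # initial hidden state

outputs = []
for x in xs:
    # The same U, W, and V are reused at every time step;
    # only the state h changes, carrying information forward in time.
    h = np.tanh(U @ x + W @ h)
    outputs.append(V @ h)

print(len(outputs), outputs[0].shape)
```

Note how the state h computed at one step becomes an input to the next step; that feedback is what gives the network its sense of sequence.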

A recurrent network allows a neural network to identify sequences of elements and predict which elements typically come next. This has huge applications in predicting text, stock prices, and of course games. Pretty much any activity that can benefit from some grasp of time or a sequence of events can benefit from an RNN. Unfortunately, the standard RNN, the type shown previously, fails to learn longer sequences due to a problem with vanishing (or exploding) gradients. We will get further into this problem and its solution in the next section.
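The gradient problem mentioned above can be illustrated numerically. In BPTT, the gradient that reaches a step k steps in the past involves a product of k per-step Jacobians, so its magnitude tends to shrink (or blow up) exponentially with k. The following is a hypothetical sketch, not part of any training loop, using small random recurrent weights and a tanh cell:

```python
import numpy as np

rng = np.random.default_rng(1)

n_hidden = 8
W = rng.normal(0, 0.1, (n_hidden, n_hidden))  # small recurrent weights

grad = np.ones(n_hidden)  # stand-in for the gradient at the final step
h = np.zeros(n_hidden)
norms = []
for step in range(30):
    h = np.tanh(W @ h + rng.normal(size=n_hidden))
    # Jacobian of h_t with respect to h_{t-1} for a tanh cell:
    # diag(1 - h^2) @ W, so each backward step multiplies by a small matrix.
    jac = np.diag(1.0 - h**2) @ W
    grad = jac.T @ grad
    norms.append(np.linalg.norm(grad))

print(norms[0], norms[-1])  # the norm collapses toward zero
```

After only 30 steps, the gradient norm has all but vanished, which is why a plain RNN struggles to connect events separated by long gaps in a sequence.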