Understanding TD learning

We will first learn about TD learning, a fundamental concept in RL. In TD learning, the agent learns from experience: it undertakes several trial episodes in the environment, and the rewards accrued are used to update the value functions. Specifically, the agent updates its state-action value function as it experiences new states and actions. The Bellman equation is used to perform this update, and the goal is to minimize the TD error. This essentially means the agent is reducing its uncertainty about which action is optimal in a given state; it gains confidence in the optimal action by lowering the TD error.
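The update described above can be sketched as a minimal tabular example. This is an illustrative sketch, not code from the book: the environment (a short deterministic chain of states), the table sizes, and the parameters `alpha` and `gamma` are all assumptions chosen for clarity. The key lines compute the Bellman target, form the TD error as the gap between that target and the current estimate, and nudge the state-action value toward the target:

```python
import random

# Illustrative sizes and hyperparameters (assumptions, not from the book)
n_states, n_actions = 4, 2
alpha, gamma = 0.1, 0.99          # learning rate and discount factor

# State-action value table Q(s, a), initialized to zero
Q = [[0.0] * n_actions for _ in range(n_states)]

def td_update(s, a, r, s_next):
    """One TD step: move Q(s, a) toward the Bellman target."""
    target = r + gamma * max(Q[s_next])   # Bellman target: reward + discounted best next value
    td_error = target - Q[s][a]           # TD error: gap between target and current estimate
    Q[s][a] += alpha * td_error           # shrink the gap, reducing uncertainty
    return td_error

# Toy experience: a chain 0 -> 1 -> 2 -> 3, reward 1.0 on reaching the last state
random.seed(0)
for episode in range(500):
    for s in range(n_states - 1):
        a = random.randrange(n_actions)   # explore with random actions
        r = 1.0 if s + 1 == n_states - 1 else 0.0
        td_update(s, a, r, s + 1)
```

After many episodes the TD errors shrink toward zero and the Q-values settle near the discounted returns, which is exactly the sense in which the agent "gains confidence" in its value estimates.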