Understanding TD learning

We will first learn about TD learning, a fundamental concept in RL. In TD learning, the agent learns from experience: it undertakes several trial episodes in the environment, and the rewards it accrues are used to update the value functions. Specifically, the agent updates its state-action value function as it experiences new states and actions. The Bellman equation is used to perform this update, and the goal is to minimize the TD error. In effect, the agent reduces its uncertainty about which action is optimal in a given state; by lowering the TD error, it gains confidence in the optimal action for that state.
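As a concrete illustration, here is a minimal sketch of such a tabular TD update in Python, in the Q-learning style. The state/action counts, the step size alpha, and the discount factor gamma are illustrative assumptions rather than values from this chapter:

```python
import numpy as np

# Illustrative sizes and hyperparameters (assumptions, not from the text)
n_states, n_actions = 16, 4
alpha, gamma = 0.1, 0.99   # step size and discount factor

# Tabular state-action value function Q(s, a)
Q = np.zeros((n_states, n_actions))

def td_update(s, a, r, s_next):
    """Apply one TD update to Q[s, a] from a single transition."""
    # TD target: immediate reward plus the discounted value of the
    # best next action (the Bellman equation for the optimal Q-function)
    td_target = r + gamma * np.max(Q[s_next])
    # TD error: gap between the target and the current estimate;
    # learning drives this error toward zero
    td_error = td_target - Q[s, a]
    Q[s, a] += alpha * td_error
    return td_error
```

Each transition the agent experiences (`s`, `a`, `r`, `s_next`) nudges the estimate `Q[s, a]` toward the TD target; as the TD error shrinks across episodes, the value estimates stabilize and the greedy action in each state becomes increasingly reliable.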