
Summary

In this chapter, we were introduced to the basic concepts of RL. We understood the relationship between an agent and its environment, and also learned about the MDP setting. We learned the concept of reward functions and the use of discounted rewards, as well as the idea of value and advantage functions. In addition, we saw the Bellman equation and how it is used in RL. We also learned the difference between an on-policy and an off-policy RL algorithm. Furthermore, we examined the distinction between model-free and model-based RL algorithms. All of this lays the groundwork for us to delve deeper into RL algorithms and how we can use them to train agents for a given task.

In the next chapter, we will investigate our first two RL algorithms: Q-learning and SARSA. Note that in Chapter 2, Temporal Difference, SARSA, and Q-Learning, we will code the agents in plain Python, as they are tabular-learning algorithms. But from Chapter 3, Deep Q-Network, onward, we will use TensorFlow to code deep RL agents, as they will require neural networks.
