- TensorFlow Reinforcement Learning Quick Start Guide
- Kaushik Balakrishnan
Off-policy method
Off-policy methods, on the other hand, use one policy to make action decisions and a different policy to evaluate and improve performance. For instance, many off-policy algorithms store experiences in a replay buffer and sample data from this buffer to train the model: at each training step, a mini-batch of experiences is drawn at random and used to update the policy and value functions. Coming back to the previous robot example, in an off-policy setting the robot does not evaluate its performance with the policy it is currently following; it uses different policies for exploration and for evaluation. Training from a replay buffer is therefore off-policy learning: the robot's current policy, which selects its immediate actions, differs from the policies that generated the transitions in the sampled mini-batch, because the policy has changed between the time the data was collected and the current training step. DQN, DDQN, and DDPG are off-policy algorithms that we'll look at in later chapters of this book.
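The replay-buffer mechanism is easy to see in code. The following is a minimal sketch, not the book's implementation; the class name ReplayBuffer and the capacity and batch_size defaults are illustrative assumptions. Experiences are appended as they occur under whatever policy is currently acting, and training later draws random mini-batches that were mostly generated by older versions of the policy:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size buffer of experience tuples for off-policy training (illustrative sketch)."""

    def __init__(self, capacity=100000):
        # deque silently discards the oldest experiences once capacity is reached
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        # Experiences are stored under whatever policy is acting right now;
        # they may be replayed long after that policy has changed.
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Uniformly sample a mini-batch. The sampled transitions generally
        # come from older policies, which is what makes learning off-policy.
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones
```

An agent such as DQN would call add() after every environment step and sample() at each training step, updating its value network from transitions that the current policy did not produce.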