Q-learning
Q-learning is one of the most widely used reinforcement learning algorithms. This is due to its ability to compare the expected utility of the available actions without requiring a model of the environment. With this technique, it is possible to find an optimal action for every state of a finite MDP.
A general approach to the reinforcement learning problem is to estimate, through the learning process, an evaluation function. This function must assess, through the sum of the rewards, how good a particular policy is. Q-learning learns the Q function (the action-value function), where Q(s, a) represents the maximum discounted future reward obtainable when action a is performed in state s.
Like SARSA, Q-learning estimates the action-value function Q(s, a) incrementally, updating the value of a state-action pair at each step of the environment according to the general update scheme of the TD methods. Unlike SARSA, however, Q-learning is off-policy: while the behavior policy is improved according to the values estimated by Q(s, a), the value estimates are updated following a strictly greedy target policy; given the next state, the value used in the update is always the maximum, max Q(s', a'). The behavior policy π nevertheless plays an important role in the estimation, because it determines which state-action pairs are visited and updated.
The following is pseudocode for the Q-learning algorithm (a Python sketch of the same loop follows it):
Initialize Q(s, a) arbitrarily for all state-action pairs
Repeat (for each episode):
    Initialize s
    Repeat (for each step of the episode):
        Choose a from s using a policy derived from Q (for example, epsilon-greedy)
        Take action a, observe r, s'
        Q(s, a) <- Q(s, a) + alpha * [r + gamma * max_a' Q(s', a') - Q(s, a)]
        s <- s'
    until s is terminal
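As a concrete illustration, here is a minimal NumPy sketch of the same loop. It assumes a hypothetical environment object with a reset() method returning an integer state index and a step(a) method returning (next_state, reward, done); the function name and the hyperparameter values are placeholders, not part of the book's code.

import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    # Tabular action-value function, initialized arbitrarily (here, to zeros)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = env.reset()          # assumed to return an integer state index
        done = False
        while not done:
            # Behavior policy: epsilon-greedy with respect to the current Q
            if np.random.rand() < epsilon:
                a = np.random.randint(n_actions)
            else:
                a = int(np.argmax(Q[s]))
            s_next, r, done = env.step(a)   # assumed interface
            # Off-policy TD target: greedy (max) over actions in s',
            # regardless of which action the behavior policy will take next
            target = r if done else r + gamma * np.max(Q[s_next])
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q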
Q-learning uses a table to store a value for each state-action pair. At each step, the agent observes the current state of the environment and, following the policy π, selects and executes an action. By executing the action, the agent obtains the reward r and the new state s'. At this point, the agent can update the estimate Q(s, a) using the rule shown in the pseudocode.
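To make the table update concrete, here is a small sketch of a single update step; the table size (16 states, 4 actions) and the observed transition are hypothetical values chosen only for illustration.

import numpy as np

Q = np.zeros((16, 4))      # one row per state, one column per action

# A hypothetical observed transition: in state 0 the agent took action 2,
# received reward 1.0, and landed in state 4
s, a, r, s_next = 0, 2, 1.0, 4
alpha, gamma = 0.1, 0.99

# Move the estimate for (s, a) toward the greedy one-step target
Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])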