- PyTorch 1.x Reinforcement Learning Cookbook
- Yuxi (Hayden) Liu
- 2021-06-24 12:34:44
There's more...
In fact, irrespective of the initial state distribution, the state distribution always converges to [0.5714, 0.4286]. You can verify this with other initial distributions, such as [0.2, 0.8] and [1, 0]: after 10 steps, the distribution again reaches [0.5714, 0.4286].
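This is easy to check by repeatedly multiplying an initial distribution by the transition matrix. The sketch below assumes the 2 x 2 transition matrix T = [[0.4, 0.6], [0.8, 0.2]] from the main recipe, whose equilibrium is [4/7, 3/7] ≈ [0.5714, 0.4286]:

```python
import torch

# Transition matrix assumed from the main recipe:
# rows are the current state, columns are the next state.
T = torch.tensor([[0.4, 0.6],
                  [0.8, 0.2]])

# Try several initial distributions, including the equilibrium itself.
for v0 in ([0.5714, 0.4286], [0.2, 0.8], [1.0, 0.0]):
    v = torch.tensor([v0])          # row vector of shape (1, 2)
    for _ in range(10):             # propagate 10 steps: v <- v @ T
        v = torch.matmul(v, T)
    print(v)                        # all converge near [0.5714, 0.4286]
```

Each printed distribution matches [0.5714, 0.4286] to four decimal places, because the second eigenvalue of T has magnitude 0.4 and the deviation from equilibrium shrinks by that factor at every step.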
A Markov chain does not necessarily converge: a periodic chain oscillates forever, and a chain with transient or absorbing states can behave differently depending on where it starts. But if the chain does converge, it reaches the same equilibrium regardless of the starting distribution.
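The periodic case is easy to demonstrate with a hypothetical two-state chain that deterministically swaps states, so the distribution flips back and forth and never settles:

```python
import torch

# Hypothetical periodic chain: state 0 always moves to state 1 and vice versa.
T_periodic = torch.tensor([[0.0, 1.0],
                           [1.0, 0.0]])

v = torch.tensor([[1.0, 0.0]])
for step in range(4):
    v = torch.matmul(v, T_periodic)
    print(v)  # alternates between [0, 1] and [1, 0] without converging
```

No matter how many steps you run, the distribution keeps oscillating, so there is no limit to converge to.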