
Formulating the RL problem

The basic problem RL solves is training a model to perform some pre-defined task without any labeled data. This is accomplished by a trial-and-error approach, akin to a baby learning to walk for the first time. A baby, curious to explore the world around them, first crawls out of their crib not knowing where to go or what to do. Initially, they take small steps, make mistakes, keep falling on the floor, and cry. But, after many such episodes, they start to stand on their feet on their own, much to the delight of their parents. Then, with a giant leap of faith, they start to take slightly longer steps, slowly and cautiously. They still make mistakes, albeit fewer than before.

After many more such tries—and failures—they gain more confidence, which enables them to take even longer steps. With time, these steps get longer and faster, until eventually, they start to run. And that's how they grow up into a child who can walk. Was any labeled data provided to them that they used to learn to walk? No. They learned by trial and error, making mistakes along the way, learning from them, and getting better with the incremental gains made on every attempt. This is how RL works: learning by trial and error.

Building on the preceding example, here is another situation. Suppose you need to train a robot by trial and error. Initially, let the robot wander randomly in the environment. Its good and bad actions are recorded, and a reward function is used to quantify them: a good action performed in a state earns a high reward, while bad actions are penalized. This reward serves as a learning signal that the robot can use to improve itself. After many such episodes of trial and error, the robot will have learned the best action to perform in each state, based on the rewards it received. This is how learning in RL works. For the rest of the book, however, we will not talk about human characters. In RL parlance, the child described previously is the agent, and their surroundings are the environment. The agent interacts with the environment and, in the process, learns to perform a task, for which the environment provides a reward.
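The trial-and-error loop just described can be sketched in a few lines of code. The following is a minimal, hypothetical illustration, not an implementation from this book: it uses tabular Q-learning (covered later in the RL literature) on a made-up one-dimensional corridor environment, where the agent starts by wandering randomly and the reward signal gradually shapes its estimate of the best action in each state. The environment, state count, and hyperparameter values are all assumptions chosen for illustration.

```python
import random

# Hypothetical toy environment: a 1-D corridor with states 0..4.
# Reaching state 4 (the goal) gives reward +1; every other step gives 0.
# Actions: 0 = move left, 1 = move right.
N_STATES, GOAL = 5, 4
ACTIONS = [0, 1]

def step(state, action):
    """Apply an action, returning (next_state, reward, done)."""
    next_state = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Tabular action-value estimates: Q[state][action], initially all zero.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration
random.seed(0)

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action,
        # but sometimes wander randomly, like the robot in the text.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # Nudge the estimate toward reward + discounted future value.
        target = reward + gamma * max(Q[next_state])
        Q[state][action] += alpha * (target - Q[state][action])
        state = next_state

# After many episodes, "right" (1) should be the learned best action
# in every non-goal state.
best = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(GOAL)]
print(best)  # expected: [1, 1, 1, 1]
```

Note how no labeled examples of "correct" actions are ever given: the only feedback is the scalar reward, and the agent improves purely by acting, observing the outcome, and updating its estimates.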
