TensorFlow Reinforcement Learning Quick Start Guide
Kaushik Balakrishnan
Identifying episodes
We mentioned earlier that the agent explores the environment through numerous trials and errors before it learns to maximize its goals. Each such trial, from start to finish, is called an episode. The starting state may or may not be the same in every episode; likewise, an episode can come to a happy or a sad ending.
A happy, or good, ending is when the agent accomplishes its pre-defined goal, such as successfully navigating to a final destination for a mobile robot, or picking up a peg and placing it in a hole for an industrial robot arm. An episode can also have a sad ending, where the agent crashes into an obstacle, gets trapped in a maze and is unable to get out, and so on.
In many RL problems, an upper bound on the number of time steps is specified, and the episode terminates once it is reached. In other problems, no such bound exists, and the episode can last a very long time, ending only when a goal is accomplished or a terminal failure occurs, such as crashing into an obstacle or falling off a cliff. The Voyager spacecraft, launched by NASA in 1977, has traveled beyond our solar system – it is an example of a system with an infinite-horizon episode.
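To make this concrete, the following is a minimal sketch of an episode loop, assuming the classic OpenAI Gym API (where env.step() returns a done flag) and the CartPole-v0 environment; the random policy is only a placeholder for a learned agent. It shows both ways an episode can end: the environment signals termination (a goal reached or a failure), or the fixed upper bound of MAX_STEPS time steps runs out:

```python
# A minimal sketch of running episodes, assuming OpenAI Gym's
# CartPole-v0 environment. The random action below is a stand-in
# for a learned agent's policy.
import gym

env = gym.make('CartPole-v0')
MAX_STEPS = 200  # upper bound on time steps per episode

for episode in range(5):
    state = env.reset()  # start of a new episode
    total_reward = 0.0
    for t in range(MAX_STEPS):
        action = env.action_space.sample()  # placeholder policy
        state, reward, done, info = env.step(action)
        total_reward += reward
        if done:  # terminal state reached: success or failure
            break
    print('episode {} ended after {} steps, return = {}'
          .format(episode, t + 1, total_reward))

env.close()
```

An infinite-horizon problem, such as the Voyager example, corresponds to dropping the MAX_STEPS bound and letting the inner loop run until (or unless) the environment itself signals termination.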
Next, we will find out what a reward function is and why we need to discount future rewards. The reward function is key, as it is the signal from which the agent learns.