
Processing the flow of application submission in YARN

The following steps describe the flow of application submission in YARN:

  1. Using a client or APIs, the user submits the application; let's say a Spark job JAR. The ResourceManager (RM), whose primary task is to track all the applications running on the entire Hadoop cluster and the resources available on each node, accepts the newly submitted application, subject to the privileges of the submitting user (a minimal YarnClient submission sketch follows this list).
  2. After this, the RM delegates the task to a scheduler, which searches for a container that can host the application-specific ApplicationMaster (AM). While the scheduler does take into consideration parameters such as resource availability, task priority, data locality, and so on before scheduling or launching an ApplicationMaster, it has no role in monitoring or restarting a failed job. It is the responsibility of the RM to keep track of an AM and restart it in a new container if it fails.
  3. Once the ApplicationMaster is launched, it becomes the prerogative of the AM to negotiate with the RM for the resources needed to launch task-specific containers. Negotiations with the RM typically cover:
    • The priority of the tasks at hand.
    • The number of containers to be launched to complete the tasks.
    • The resources needed to execute the tasks, such as RAM and CPU; Hadoop 3.x extends this with additional countable resource types (for example, GPUs).
    • The available nodes where job containers can be launched with the required resources.

Depending on the priority and availability of resources, the RM grants containers, each represented by a container ID and the hostname of the node on which it can be launched (see the AMRMClient sketch below).
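To make step 1 concrete, here is a minimal sketch of submitting an application through YARN's Java client API. The application name, queue, resource sizes, and AM launch command are illustrative placeholders rather than values from this chapter; in practice, a tool such as spark-submit builds this submission context for you:

```java
import java.util.Collections;

import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.client.api.YarnClientApplication;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.util.Records;

public class SubmitToYarn {
  public static void main(String[] args) throws Exception {
    // Connect to the ResourceManager
    YarnClient yarnClient = YarnClient.createYarnClient();
    yarnClient.init(new YarnConfiguration());
    yarnClient.start();

    // Ask the RM for a new application ID and submission context
    YarnClientApplication app = yarnClient.createApplication();
    ApplicationSubmissionContext ctx = app.getApplicationSubmissionContext();
    ctx.setApplicationName("demo-spark-job");   // placeholder name
    ctx.setQueue("default");

    // Command that launches the ApplicationMaster (placeholder class name)
    ContainerLaunchContext amContainer =
        Records.newRecord(ContainerLaunchContext.class);
    amContainer.setCommands(Collections.singletonList(
        "$JAVA_HOME/bin/java -Xmx512m com.example.MyAppMaster"
        + " 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr"));
    ctx.setAMContainerSpec(amContainer);

    // Resources for the AM container itself: 1 GB RAM, 1 vcore
    ctx.setResource(Resource.newInstance(1024, 1));

    // Hand the application to the RM; its scheduler takes over from here
    ApplicationId appId = yarnClient.submitApplication(ctx);
    System.out.println("Submitted application " + appId);
  }
}
```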
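The negotiation itself, seen from the AM side, can be sketched with the AMRMClient API: the AM registers with the RM, files container requests carrying a priority, a resource capability, and preferred nodes, and receives granted containers (ID plus host) on its allocate() heartbeat. The hostname and sizes below are made up for illustration; a real AM runs inside the container the RM launched for it and calls allocate() in a loop:

```java
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class NegotiateWithRm {
  public static void main(String[] args) throws Exception {
    AMRMClient<ContainerRequest> rmClient = AMRMClient.createAMRMClient();
    rmClient.init(new YarnConfiguration());
    rmClient.start();

    // An AM must register with the RM before requesting containers
    rmClient.registerApplicationMaster("", 0, "");

    // What each task container needs: 2 GB RAM, 1 vcore
    Resource capability = Resource.newInstance(2048, 1);
    Priority priority = Priority.newInstance(0);

    // Prefer the node holding our input data (hypothetical hostname)
    String[] preferredNodes = {"worker-1.example.com"};
    for (int i = 0; i < 4; i++) {
      rmClient.addContainerRequest(new ContainerRequest(
          capability, preferredNodes, null /* racks */, priority));
    }

    // allocate() doubles as the AM-to-RM heartbeat: it carries our
    // requests and returns whatever the scheduler has granted so far
    AllocateResponse response = rmClient.allocate(0.0f);
    for (Container granted : response.getAllocatedContainers()) {
      System.out.println("Granted " + granted.getId()
          + " on host " + granted.getNodeId().getHost());
    }

    rmClient.unregisterApplicationMaster(
        FinalApplicationStatus.SUCCEEDED, "", "");
  }
}
```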

  4. The AM then requests the NodeManager (NM) of each respective host to launch the containers with the specified IDs and resource configurations (see the NMClient sketch after this list). The NM launches the containers but keeps a watch on the resource usage of each task. If, for example, a container starts utilizing more resources than it has been provisioned, the NM kills it; left unchecked, such a container would impact the execution of other containers, so this enforcement is central to the job isolation and fair sharing of resources that YARN guarantees. However, it is important to note that the job status and the application status as a whole are managed by the AM. It falls in the domain of the AM to continuously monitor delayed or dead containers, simultaneously negotiating with the RM to launch new containers and reassign the tasks of the dead ones.
  5. The containers executing on different nodes send application-specific statistics to the AM at regular intervals.
  6. The AM also reports the status of the application directly to the client that submitted it, in our case a Spark job.
  7. The NM monitors the resources being utilized by all the containers on its node and sends periodic updates to the RM.
  8. The AM sends periodic statistics, such as application status, task failures, and log information, to the RM.
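As referenced in step 4, the AM turns each grant into a running process by contacting the NM of the granted host. Here is a minimal sketch using the NMClient API, assuming a granted Container obtained from the allocate() response above and a placeholder task command:

```java
import java.util.Collections;

import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.client.api.NMClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.util.Records;

public class LaunchOnNm {
  // 'granted' is a container the RM returned from allocate()
  static void launchTask(NMClient nmClient, Container granted)
      throws Exception {
    ContainerLaunchContext ctx =
        Records.newRecord(ContainerLaunchContext.class);
    // Placeholder task command; stdout/stderr land in the container log dir
    ctx.setCommands(Collections.singletonList(
        "/bin/echo running-task 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr"));

    // The NM starts the process and enforces the container's memory/CPU
    // limits, killing the container if it exceeds its allocation
    nmClient.startContainer(granted, ctx);
  }

  public static void main(String[] args) {
    NMClient nmClient = NMClient.createNMClient();
    nmClient.init(new YarnConfiguration());
    nmClient.start();
    // launchTask(nmClient, granted) would be called once per granted container
  }
}
```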