Submitting a Job

- When a client submits a job, its configuration information is packaged into an XML file.
- This XML file, along with the .jar file containing the actual program code, is handed to the JobTracker.
- The JobTracker then parcels out individual tasks to TaskTracker nodes.
- When a TaskTracker receives a request to run a task, it instantiates a separate JVM for that task.
- TaskTracker nodes can be configured to run multiple tasks at the same time, if the node has enough processing power and memory.
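The flow above can be sketched as a toy model. This is an illustrative simulation, not the real Hadoop API: the `JobTracker` and `TaskTracker` classes and their methods here are assumptions for the sketch; the only real name referenced is the MRv1 property `mapred.tasktracker.map.tasks.maximum`, which caps concurrent map tasks per TaskTracker.

```python
# Simplified, illustrative model of classic (MRv1) job submission.
# Not the real Hadoop API -- a sketch of the assignment logic only.

class TaskTracker:
    def __init__(self, name, max_slots):
        self.name = name
        self.max_slots = max_slots  # cf. mapred.tasktracker.map.tasks.maximum
        self.running = []           # each task would run in its own child JVM

    def has_free_slot(self):
        return len(self.running) < self.max_slots

    def launch(self, task):
        # In real Hadoop, the TaskTracker forks a separate JVM per task,
        # so a crashing task cannot take down the TaskTracker daemon itself.
        self.running.append(task)

class JobTracker:
    def __init__(self, trackers):
        self.trackers = trackers

    def submit(self, job_conf_xml, job_jar, tasks):
        # The client hands the job config (XML) and the .jar to the JobTracker;
        # the JobTracker then parcels out individual tasks to TaskTrackers.
        unassigned = list(tasks)
        while unassigned:
            progressed = False
            for tt in self.trackers:
                if unassigned and tt.has_free_slot():
                    tt.launch(unassigned.pop(0))
                    progressed = True
            if not progressed:
                break  # every slot is full; remaining tasks must wait
        return {tt.name: list(tt.running) for tt in self.trackers}

trackers = [TaskTracker("tt1", max_slots=2), TaskTracker("tt2", max_slots=2)]
jt = JobTracker(trackers)
assignment = jt.submit("job.xml", "wordcount.jar", ["m0", "m1", "m2", "m3", "m4"])
print(assignment)  # only 4 slots total, so task m4 is left waiting
```

Note how the per-node slot limit is what lets one TaskTracker run several tasks concurrently while still bounding its load.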
Refer to HadoopExam.com Recorded Training Modules 3 and 4.
- The intermediate data is held on the TaskTracker's local disk.
- As Reducers start up, the intermediate data is distributed across the network to the Reducers.
- Reducers write their final output to HDFS.
- Once the job has completed, the TaskTracker can delete the intermediate data from its local disk.
- Note that the intermediate data is not deleted until the entire job completes.
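The shuffle described above can be sketched in miniature. This is an illustrative model, not Hadoop code: the function names and the dict standing in for local disk are assumptions; the one piece of real behavior mirrored is Hadoop's default HashPartitioner, which routes a key to `key.hashCode() % numReduceTasks` (a stable stand-in hash is used here, since Python's built-in `hash()` is salted per process).

```python
# Illustrative sketch of the MRv1 shuffle -- not real Hadoop code.
# Intermediate map output is partitioned per reducer and held on the
# mapper side's "local disk" (a dict here) until the job completes;
# reducers then pull their own partition over the network.

NUM_REDUCERS = 2

def partition_for(word):
    # Stand-in for Hadoop's default HashPartitioner
    # (key.hashCode() % numReduceTasks); stable across runs.
    return sum(map(ord, word)) % NUM_REDUCERS

def run_mapper(lines):
    """Map phase: emit (word, 1) pairs, spilled to local disk by partition."""
    local_disk = {r: [] for r in range(NUM_REDUCERS)}
    for line in lines:
        for word in line.split():
            local_disk[partition_for(word)].append((word, 1))
    return local_disk

def run_reducer(reducer_id, all_mapper_disks):
    """Reduce phase: fetch this reducer's partition from every mapper's
    local disk (the network copy), then group and sum the values."""
    counts = {}
    for disk in all_mapper_disks:
        for word, one in disk[reducer_id]:
            counts[word] = counts.get(word, 0) + one
    return counts

# Two mappers produce intermediate data on their local "disks".
disks = [run_mapper(["a b a"]), run_mapper(["b c"])]

# Reducers pull their partitions and write final output (to HDFS in Hadoop).
final = {}
for r in range(NUM_REDUCERS):
    final.update(run_reducer(r, disks))
print(final)  # {'a': 2, 'b': 2, 'c': 1} in some order

# Only now, with the whole job complete, could each mapper's local_disk
# safely be deleted -- a reducer may re-fetch it if a reduce task fails.
```

Keeping the intermediate data until the entire job finishes is what allows a failed reduce task to be re-run elsewhere without re-executing the map tasks.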
Refer to HadoopExam.com Recorded Training Modules 2, 3 and 4.
Question: The intermediate data is held on the TaskTracker's local disk. 1. True 2. False

Answer: 1. True