
MapR (HPE) Hadoop Developer Certification Questions and Answers (Dumps and Practice Questions)



Question : A TaskTracker runs all of its Map tasks in the same JVM if the machine has enough processing power and memory.

1. True
2. False

Correct Answer : 2 (False)

Submitting a Job
- When a client submits a job, its configuration information is packaged into an XML file.
- This file, along with the .jar file containing the actual program code, is handed to the JobTracker (see the driver sketch below).
- The JobTracker then parcels out individual tasks to TaskTracker nodes.
- When a TaskTracker receives a request to run a task, it instantiates a separate JVM for that task.
- TaskTracker nodes can be configured to run multiple tasks at the same time, if the node has enough processing power and memory.
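
As a rough sketch of what the client side of this looks like, the driver below uses the classic MRv1 (org.apache.hadoop.mapred) API; WordCountMapper and WordCountReducer are hypothetical user-written classes, and the final JobClient.runJob call is what packages the configuration XML and the jar and hands them to the JobTracker.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class WordCountDriver {
        public static void main(String[] args) throws Exception {
            // The job configuration is what gets serialized into the XML file
            JobConf conf = new JobConf(WordCountDriver.class);
            conf.setJobName("wordcount");

            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(IntWritable.class);

            // Hypothetical Mapper/Reducer implementations
            conf.setMapperClass(WordCountMapper.class);
            conf.setReducerClass(WordCountReducer.class);

            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            // Hands the configuration and the jar to the JobTracker and
            // blocks until the job completes
            JobClient.runJob(conf);
        }
    }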

Refer to HadoopExam.com Recorded Training Modules 3 and 4





Question : Select the correct statement

1. While the job is running, the intermediate data keeps getting deleted
2. Reducers write their final output to HDFS
3. Access Mostly Uused Products by 50000+ Subscribers
4. All 1, 2 and 3 are correct
5. None of the above


Correct Answer : 2

Explanation: Intermediate Data

The intermediate data is held on the TaskTracker's local disk.
- As Reducers start up, the intermediate data is distributed across the network to the Reducers (see the sketch below).
- Reducers write their final output to HDFS.
- Once the job has completed, the TaskTracker can delete the intermediate data from its local disk.
- Note that the intermediate data is not deleted until the entire job completes.
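
To make the HDFS-versus-local-disk distinction concrete, here is a minimal sketch using the classic mapred API; the paths are illustrative assumptions, and mapred.local.dir would normally be set cluster-wide in mapred-site.xml rather than in driver code.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobConf;

    public class OutputLocations {
        public static void configure(JobConf conf) {
            // Final Reducer output is written to HDFS at this path
            FileOutputFormat.setOutputPath(conf,
                    new Path("/user/hadoop/wordcount/output"));

            // Intermediate map output never goes to HDFS; it is spilled to
            // the TaskTracker's local disk under the directories listed here
            conf.set("mapred.local.dir", "/tmp/hadoop/mapred/local");
        }
    }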

Refer to HadoopExam.com Recorded Training Modules 2, 3 and 4





Question : The intermediate data is held on the TaskTracker's local disk.
1. True
2. False


Correct Answer : 1 (True)

Intermediate Data

The intermediate data is held on the TaskTracker's local disk.
- As Reducers start up, the intermediate data is distributed across the network to the Reducers.
- Reducers write their final output to HDFS.
- Once the job has completed, the TaskTracker can delete the intermediate data from its local disk.
- Note that the intermediate data is not deleted until the entire job completes.

Refer to HadoopExam.com Recorded Training Modules 2, 3 and 4



Related Questions


Question : Using a Combiner will increase the network overhead.

1. True
2. False


Question : A Combiner reduces the amount of data sent to the Reducer.

1. True
2. False


Question : A Combiner reduces the network traffic but increases the amount of work needed to be done by the Reducer.

1. True
2. False
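
All three questions above hinge on the same mechanism: a Combiner runs a reduce-style aggregation on each Mapper's local output before the shuffle, so it decreases (not increases) the data sent over the network to the Reducers, and the Reducers then have less data to process, not more. A minimal sketch with the classic mapred API, assuming a hypothetical sum-style WordCountReducer whose operation is associative and commutative:

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;

    public class CombinerSetup {
        public static void configure(JobConf conf) {
            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(IntWritable.class);

            // Because summing is associative and commutative, the same class
            // can serve as both Combiner and Reducer; the Combiner runs on
            // the map side and pre-aggregates output before it crosses the
            // network to the Reducers
            conf.setCombinerClass(WordCountReducer.class);
            conf.setReducerClass(WordCountReducer.class);
        }
    }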


Question : Which is correct for the Pseudo-Distributed mode of Hadoop?

1. This is a single-machine cluster
2. All daemons run on the same machine
3. Access Mostly Uused Products by 50000+ Subscribers
4. All 1, 2 and 3 are correct
5. Only 1 and 2 are correct
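
As background, pseudo-distributed mode runs every daemon (NameNode, Secondary NameNode, DataNode, JobTracker, TaskTracker) in its own JVM on a single machine. A minimal MRv1-era configuration sketch follows; localhost and ports 9000/9001 are the conventional defaults from the Apache documentation, not required values.

    <!-- core-site.xml -->
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
      </property>
    </configuration>

    <!-- hdfs-site.xml -->
    <configuration>
      <property>
        <name>dfs.replication</name>
        <!-- only one DataNode, so keep a single copy of each block -->
        <value>1</value>
      </property>
    </configuration>

    <!-- mapred-site.xml -->
    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
      </property>
    </configuration>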





Question : Which daemon is responsible for the housekeeping of the NameNode?
1. JobTracker
2. Tasktracker
3. Access Mostly Uused Products by 50000+ Subscribers
4. Secondary NameNode




Question : Which daemon is responsible for instantiating and monitoring individual Map and Reduce tasks?
1. JobTracker
2. TaskTracker
3. Access Mostly Uused Products by 50000+ Subscribers
4. DataNode