
MapR (HP) Hadoop Developer Certification Questions and Answers (Dumps and Practice Questions)



Question : Which daemon stores the file data blocks?

1. NameNode
2. TaskTracker
3. DataNode
4. Secondary NameNode



Correct Answer : 3


Explanation: DataNodes hold the actual file data blocks
- Each block is 64 MB or 128 MB in size, depending on the configured block size
- Each block is replicated three times across the cluster by default

Refer HadoopExam.com Recorded Training Module : 2 and 16
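As a rough illustration of the numbers above (plain Python, not a Hadoop API; the 128 MB block size and replication factor of 3 are the defaults mentioned in the explanation), the block count and replicated storage footprint of a file can be estimated like this:

```python
import math

def hdfs_block_footprint(file_size_bytes, block_size_mb=128, replication=3):
    """Estimate how many HDFS blocks a file occupies and the raw
    storage consumed once each block is replicated."""
    block_size = block_size_mb * 1024 * 1024
    num_blocks = math.ceil(file_size_bytes / block_size)
    # Every byte is stored `replication` times; the last block only
    # holds the remaining data, so no padding is counted.
    total_bytes = file_size_bytes * replication
    return num_blocks, total_bytes

# A 300 MB file with 128 MB blocks spans 3 blocks (128 + 128 + 44 MB);
# with replication 3 it consumes 900 MB of raw cluster storage.
blocks, raw = hdfs_block_footprint(300 * 1024 * 1024)
```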






Question : When a client submits a Job, its configuration information is packaged into an XML file
1. True
2. False



Correct Answer : 1


Explanation: Submitting a Job
- When a client submits a job, its configuration information is packaged into an XML file.
- This file, along with the .jar file containing the actual program code, is handed to the JobTracker.
- The JobTracker then parcels out individual tasks to TaskTracker nodes.
- When a TaskTracker receives a request to run a task, it instantiates a separate JVM for that task.
- TaskTracker nodes can be configured to run multiple tasks at the same time, if the node has enough processing power and memory.

Refer HadoopExam.com Recorded Training Module : 3 and 4
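To make the "configuration packaged into XML" idea concrete, here is a plain-Python sketch (not Hadoop's own code) of serializing key/value job settings into the `<configuration>` XML shape Hadoop uses; the property names are real Hadoop settings, used here only as sample data:

```python
import xml.etree.ElementTree as ET

def config_to_xml(conf):
    """Serialize a job-configuration dict into Hadoop-style
    <configuration><property><name>...<value>... XML."""
    root = ET.Element("configuration")
    for name, value in conf.items():
        prop = ET.SubElement(root, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = str(value)
    return ET.tostring(root, encoding="unicode")

job_conf = {
    "mapred.job.name": "word-count",  # real MRv1 property names,
    "mapred.reduce.tasks": 2,         # used here purely as example data
}
xml_text = config_to_xml(job_conf)
```

It is this kind of XML payload, together with the job's .jar file, that the client hands to the JobTracker.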





Question : A TaskTracker cannot start multiple tasks on the same node
1. True
2. False

Correct Answer : 2


Explanation: Submitting a Job
- When a client submits a job, its configuration information is packaged into an XML file.
- This file, along with the .jar file containing the actual program code, is handed to the JobTracker.
- The JobTracker then parcels out individual tasks to TaskTracker nodes.
- When a TaskTracker receives a request to run a task, it instantiates a separate JVM for that task.
- TaskTracker nodes can be configured to run multiple tasks at the same time, if the node has enough processing power and memory.

Refer HadoopExam.com Recorded Training Module : 3 and 4
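For context on that answer: in classic MapReduce (MRv1) the number of simultaneous map and reduce tasks a TaskTracker may run is set in mapred-site.xml. The property names below are the real MRv1 settings (both default to 2); the values shown are illustrative:

```xml
<configuration>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>  <!-- up to 4 concurrent map tasks on this node -->
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>  <!-- up to 2 concurrent reduce tasks on this node -->
  </property>
</configuration>
```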



Related Questions


Question : To check MapR job performance, we use the MapR Control System. However, a job completed with % map tasks and % reduce tasks and the job is not finishing.
So you can use the MapR Control System as follows:
A. You can filter the views in the MapR Control System to list only reduce tasks
B. Once you have a list of your job's reduce tasks, you can sort the list by duration to see if any reduce task attempts are taking an abnormally long time to execute
C. You cannot filter the views in the MapR Control System to list only reduce tasks
D. The MapR Control System can display detailed information about those task attempts, including log files for those task attempts
1. A,B,C
2. B,C,D
3. (option not shown)
4. A,B,D


Question : Can we use the MapR Control System Metrics display to gauge the performance of two different jobs that perform the same function, one written in Python using Pydoop and the other written in C++ using Pipes?
1. Yes
2. No


Question : To use MapR Metrics, set up a ________ database to log metrics data.

1. MySQL
2. Oracle
3. (option not shown)
4. SQL Server


Question : Hadoop will start transferring data as soon as a Mapper finishes its task; it will not wait until the last map task has finished
1. True
2. False


Question : What happens if a Mapper runs slowly relative to the others?
1. No reducer can start until the last Mapper has finished
2. If a Mapper is running slowly, Hadoop will start another instance of that Mapper on another machine
3. (option not shown)
4. The result of the first Mapper to finish will be used
5. All of the above


Question : What is the Combiner?

1. Runs locally on a single Mapper's output
2. Using a Combiner can reduce network traffic
3. (option not shown)
4. None of 1, 2 and 3
5. All of 1, 2 and 3 apply to the Combiner
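The combiner idea in that question can be sketched in plain Python (not the Hadoop API): a combiner pre-aggregates a single mapper's local output, so fewer records cross the network during the shuffle. The word-count example is the classic illustration:

```python
from collections import defaultdict

def mapper(line):
    # Classic word-count map step: emit (word, 1) for every word.
    return [(word, 1) for word in line.split()]

def combiner(pairs):
    # Runs locally on one mapper's output: sum the counts per word
    # before the shuffle, shrinking what is sent over the network.
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return sorted(totals.items())

raw = mapper("to be or not to be")  # 6 records emitted by the mapper
combined = combiner(raw)            # only 4 records cross the network
```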