
MapR (HP) Hadoop Developer Certification Questions and Answers (Dumps and Practice Questions)



Question : In the label-based scheduling


1. Users can override the default scheduling algorithm and gain more control over where in the cluster a job runs

2. The location of the labels file can be defined using jobtracker.node.labels.file in the mapred-site.xml file

3. (option not shown)

4. 1,2

5. 1,2,3


Correct Answer : (not shown)
Explanation: The scheduler's only responsibility is to match the submitted label against the available node labels and run the job's tasks on the
nodes whose labels match.
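The configuration named in option 2 can be sketched as follows. This is an illustrative setup for MRv1-style (MapR) label-based scheduling; the file paths and node names are assumptions, not cluster defaults, and the snippets are written to local files purely for illustration.

```shell
# The node labels file maps node identifiers (regex patterns) to labels.
# Node names here are made up for the example.
cat > node.labels <<'EOF'
CentOS001  heavy, high_ram, high_cpu
CentOS002  light, low_ram, low_cpu
EOF

# mapred-site.xml then points the JobTracker at that file via the
# jobtracker.node.labels.file property (path below is an assumption):
cat > labels-property.xml <<'EOF'
<property>
  <name>jobtracker.node.labels.file</name>
  <value>/opt/mapr/conf/node.labels</value>
</property>
EOF
```

The JobTracker re-reads this file periodically, so labels can be adjusted without a restart on most distributions; check your distribution's documentation for the refresh behavior.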





Question : Select correct statement regarding label based scheduling


1. To list all the available labels in the cluster, you can use hadoop job -showlabels

2. We can use the following command-line option to submit a job with a label: hadoop jar -D mapred.job.label=hadoopexam

3. (option not shown)

4. 1,2

5. 1,2,3


Correct Answer : (not shown)
Explanation: If you don't provide any label for the job, it will not fail; the default scheduling algorithm will be used instead.
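The two commands from options 1 and 2 look like this in practice. This sketch assumes a running MRv1 cluster; the jar name, driver class, and input/output paths are hypothetical, added only to make the submission command complete.

```shell
# Requires a live MapR/Hadoop MRv1 cluster.
# List every node label currently known to the JobTracker:
hadoop job -showlabels

# Submit a job that should run only on nodes carrying the label "hadoopexam"
# (wordcount.jar, WordCountDriver, in/ and out/ are hypothetical names):
hadoop jar wordcount.jar WordCountDriver -D mapred.job.label=hadoopexam in/ out/
```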



Question : You have executed the following command

hadoop job -showlabels
Node labels :
CentOS001 : [heavy, high_ram, high_cpu]
CentOS002 : [light, low_ram, low_cpu]
CentOS003 : [medium, m_ram, m_cpu]

And now you submit a job with the command below

hadoop jar -D mapred.job.label=hadoopexam

What would happen?
1. It will submit the entire job on CentOS001

2. It will submit the entire job on CentOS002

3. (option not shown)

4. It will use default scheduling algorithm

5. Job will hang


Correct Answer : (not shown)
Explanation: If we submit a job with a label that does not exist, the job will hang and will never be executed. If you don't provide a label at
all, the default scheduling algorithm will be used.




Related Questions


Question : How do DataNodes send their block reports to the NameNode or JobTracker?

1. The DataNode sends heartbeat information only once, when data is stored on HDFS

2. The DataNode sends heartbeat information once a day

3. (option not shown)

4. Both 1 and 3
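Useful background for this question: in stock HDFS the DataNode heartbeats the NameNode every few seconds and sends a full block report on a much longer interval. The values below are the Hadoop 1.x defaults for the two relevant hdfs-site.xml properties; verify them against your distribution before relying on them. The snippet is written to a local file only for illustration.

```shell
# Hadoop 1.x defaults (assumption: stock Apache HDFS, not a MapR variant):
cat > hdfs-intervals.xml <<'EOF'
<property>
  <name>dfs.heartbeat.interval</name>
  <value>3</value> <!-- seconds between DataNode heartbeats -->
</property>
<property>
  <name>dfs.blockreport.intervalMsec</name>
  <value>3600000</value> <!-- full block report once per hour -->
</property>
EOF
```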



Question : Which of the following Scheduler you can configure in Hadoop?
A. Fair Scheduler
B. Capacity Scheduler
C. Weight Scheduler
D. Timing Scheduler

1. A,B
2. B,C
3. (option not shown)
4. A,D
5. B,D
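In MRv1 the scheduler is a pluggable class selected in mapred-site.xml via the mapred.jobtracker.taskScheduler property. A sketch showing how the two configurable schedulers from the options above are chosen; the class names are the stock Apache ones, and the snippet is written to a local file only for illustration.

```shell
cat > scheduler-property.xml <<'EOF'
<!-- Fair Scheduler -->
<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.FairScheduler</value>
</property>
<!-- or, alternatively, the Capacity Scheduler:
<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.CapacityTaskScheduler</value>
</property>
-->
EOF
```

If the property is not set at all, the JobTracker falls back to the default FIFO scheduler.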


Question : Which is true about Fair Scheduler in Hadoop?

1. It is a default scheduler in Hadoop

2. Each user has its own pool

3. (option not shown)

4. 1,3

5. 1,2,3



Question : Which is a true statement regarding Capacity Scheduler?
1. Queues can be configured with their weighted access

2. Hierarchical queues can be configured

3. (option not shown)

4. 1,3
5. 1,2,3
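Queue capacities for the Capacity Scheduler are declared in capacity-scheduler.xml as percentages of cluster resources. A minimal sketch with two queues whose capacities sum to 100%; the queue names "production" and "research" are made up for the example, and the snippet is written to a local file only for illustration.

```shell
cat > capacity-queues.xml <<'EOF'
<property>
  <name>mapred.capacity-scheduler.queue.production.capacity</name>
  <value>70</value> <!-- 70% of cluster slots guaranteed to this queue -->
</property>
<property>
  <name>mapred.capacity-scheduler.queue.research.capacity</name>
  <value>30</value> <!-- remaining 30% for the research queue -->
</property>
EOF
```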


Question : Select correct statement regarding Fair Scheduler


1. The Fair Scheduler gives each user an equal share. By default there is one pool per user.

2. When a slot is free, the most starved job gets it

3. (option not shown)

4. 1,2

5. 1,2,3
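Beyond the default one-pool-per-user behavior in statement 1, named pools can be declared explicitly in the Fair Scheduler's allocation file (fair-scheduler.xml). A sketch under the MRv1 Fair Scheduler's allocation-file format; the pool name "hadoopexam" and the limits are illustrative, and the file is written locally only for the example.

```shell
cat > fair-scheduler.xml <<'EOF'
<?xml version="1.0"?>
<allocations>
  <pool name="hadoopexam">
    <minMaps>4</minMaps>       <!-- guaranteed map slots for this pool -->
    <minReduces>2</minReduces> <!-- guaranteed reduce slots -->
    <weight>2.0</weight>       <!-- twice the share of a default pool -->
  </pool>
  <userMaxJobsDefault>5</userMaxJobsDefault>
</allocations>
EOF
```

Users or jobs not assigned to a named pool still fall into their own per-user pool, which is what makes the scheduler "fair" by default.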


Question : In the Capacity Scheduler, jobs are submitted to queues. What ordering is maintained inside a queue by default?
1. LIFO

2. FIFO

3. (option not shown)

4.