
MapR (HPE) Hadoop Developer Certification Questions and Answers (Dumps and Practice Questions)



Question : Match the following schedulers to their features

1. Capacity Scheduler
2. Fair Scheduler

A. Pool
B. Queue
C. Supports preemption


1. 1-A, 1-B, 2-C

2. 1-B, 2-A, 2-C

3. 1-B, 2-B, 1-C

4. 1-A, 1-B, 1-C

Correct Answer : 2
Explanation: The Capacity Scheduler organizes cluster resources into queues (1-B), while the Fair Scheduler organizes them into pools (2-A) and supports preemption of running tasks (2-C).



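As background for this question, the active scheduler in Hadoop is selected in yarn-site.xml via the `yarn.resourcemanager.scheduler.class` property; the sketch below writes such a property snippet and checks it (the class name shown is the standard Fair Scheduler class, not something stated in the question itself).

```shell
# Sketch: select the Fair Scheduler in a yarn-site.xml property block.
cat > yarn-site-snippet.xml <<'EOF'
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
EOF
# Confirm the scheduler class made it into the snippet.
grep -o 'FairScheduler' yarn-site-snippet.xml
```

Swapping in `org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler` selects the queue-based Capacity Scheduler instead.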

Question : Match the following

1. Resource Manager
2. Node Manager
3. Application Master


A. Creates and deletes containers
B. Launches the applications
C. Requests containers for the applications

1. 1-A, 2-B, 3-C
2. 1-C, 2-B, 3-A
3. 1-B, 2-C, 3-A
4. 1-A, 2-C, 3-B

Correct Answer : 1
Explanation: The ResourceManager creates and deletes containers across the cluster (1-A), each NodeManager launches the applications' processes on its node (2-B), and the ApplicationMaster requests containers for its application (3-C).




Question : You are upgrading your Hadoop installation to use YARN. Which of the following are correct features of YARN?

1. YARN is similar to the JobTracker and supports multiple such instances per cluster.
2. YARN supports both MapReduce and non-MapReduce frameworks.
3. You can also configure slots in YARN.
4. YARN supports pluggable schedulers.


1. 1,2,4

2. 1,2,3,4

3. 1,3,4

4. 1,4


Correct Answer : 1
Explanation: YARN runs both MapReduce and non-MapReduce frameworks (2) and supports pluggable schedulers such as the Capacity and Fair Schedulers (4). Unlike MRv1, YARN allocates resources as containers rather than fixed map/reduce slots, so slot configuration (3) does not apply.


Related Questions


Question : Using the Hadoop MapReduce streaming framework, you have to use the org.apache.hadoop.mapred.lib.IdentityMapper Java class as a mapper and /bin/wc as a reducer.
Select the correct command from the options below.

1. $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
-input myInputDirs \
-output myOutputDir \
-mapR org.apache.hadoop.mapred.lib.IdentityMapper -reducer /bin/wc


2. $HADOOP_HOME/bin/hadoop \
-input myInputDirs \
-output myOutputDir \
-mapper org.apache.hadoop.mapred.lib.IdentityMapper \
-reducer /bin/wc


3. $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
-input myInputDirs \
-output myOutputDir \
-map org.apache.hadoop.mapred.lib.IdentityMapper \
-red /bin/wc


4. $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
-input myInputDirs \
-output myOutputDir \
-mapper org.apache.hadoop.mapred.lib.IdentityMapper \
-reducer /bin/wc


Question : By default, streaming tasks exiting with non-zero status are considered to be _________ tasks.

1. Failure

2. Success




Question : You have written your Python code as a mapper for a MapReduce job in a file called "myPythonScript.py". To run the MapReduce job, you have to transfer this Python file to each node of the cluster before starting the job.
1. True
2. False


Question : You have written your Python code as a mapper for a MapReduce job in a file called "myPythonScript.py", with /bin/wc as a reducer.
Select the correct option which will run the MapReduce job.
1. $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
-input myInputDirs \
-output myOutputDir \
-mapper myPythonScript.py \
-reducer /bin/wc


2. $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
-input myInputDirs \
-output myOutputDir \
-mapper myPythonScript.py \
-reducer /bin/wc \
-file myPythonScript.py


3. $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
-input myInputDirs \
-output myOutputDir \
-mapper myPythonScript.py \
-reducer /bin/wc \
-source myPythonScript.py


4. Any of the above


Question : You have written your Python code as a mapper for a MapReduce job in a file called "myPythonScript.py", with /bin/wc as a reducer.
Your mapper also uses lookup data stored in the file myDictionary.txt.
Select the correct option which will run the MapReduce job.
1. $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
-input myInputDirs \
-output myOutputDir \
-mapper myPythonScript.py \
-reducer /bin/wc \
-file myDictionary.txt

2. $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
-input myInputDirs \
-output myOutputDir \
-mapper myPythonScript.py \
-reducer /bin/wc \
-file myPythonScript.py \
-file myDictionary.txt
3. $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
-input myInputDirs \
-output myOutputDir \
-mapper myPythonScript.py \
-reducer /bin/wc \
-source myPythonScript.py \
-file myDictionary.txt
4. Any of the above

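The -file option ships every listed file into each task's working directory, which is why the mapper can open myDictionary.txt by its bare name. A local sketch of that behavior follows; the file contents and mapper script are made up for illustration.

```shell
# The lookup file sits in the working directory, as -file would place it on each node.
printf 'cat=animal\ncar=vehicle\n' > myDictionary.txt
cat > lookup_mapper.py <<'EOF'
import sys
# Load the shipped lookup file by its bare name, then map each input key.
lookup = dict(line.strip().split('=') for line in open('myDictionary.txt'))
for line in sys.stdin:
    print(lookup.get(line.strip(), 'unknown'))
EOF
printf 'cat\nplane\n' | python3 lookup_mapper.py
```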

Question : Which of the following is/are valid options for the streaming job submission command?

A. -inputformat JavaClassName
B. -outputformat JavaClassName
C. -partitioner JavaClassName
D. -combiner streamingCommand or JavaClassName
1. A,B,C
2. B,C,D
4. A,B,D
5. A,B,C,D