
MapR (HP) Hadoop Developer Certification Questions and Answers (Dumps and Practice Questions)



Question : You have written your Python code as the mapper for a MapReduce job in a file called "myPythonScript.py", and you are using /bin/wc as the reducer.
Select the correct option that will run the MapReduce job.

1. $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
-input myInputDirs \
-output myOutputDir \
-mapper myPythonScript.py \
-reducer /bin/wc


2. $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
-input myInputDirs \
-output myOutputDir \
-mapper myPythonScript.py \
-reducer /bin/wc \
-file myPythonScript.py


3. $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
-input myInputDirs \
-output myOutputDir \
-mapper myPythonScript.py \
-reducer /bin/wc \
-source myPythonScript.py


4. Any of the above

Correct Answer : 2
Explanation: You can specify any executable as the mapper and/or the reducer. The executables do not need to pre-exist on the machines in the cluster; however, if they
do not, you will need to use the "-file" option to tell the framework to pack your executable files as part of the job submission. For example:

$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
-input myInputDirs \
-output myOutputDir \
-mapper myPythonScript.py \
-reducer /bin/wc \
-file myPythonScript.py
The above example specifies a user-defined Python executable as the mapper. The option "-file myPythonScript.py" causes the Python executable to be shipped to the
cluster machines as part of the job submission.
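
The question does not show the contents of myPythonScript.py. As a minimal sketch (not the actual exam script), a streaming mapper only needs to read records from stdin and write tab-separated key/value pairs to stdout; a hypothetical word-count style mapper could look like this:

#!/usr/bin/env python
# Hypothetical contents of myPythonScript.py: a word-count style streaming mapper.
# Hadoop Streaming feeds one input record per line on stdin and expects
# tab-separated key/value pairs on stdout.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        # Emit <word, 1>; /bin/wc as the reducer would then simply count what it receives.
        print("%s\t%d" % (word, 1))

If the script is passed directly as the -mapper command, it also needs the shebang line and execute permission so that the task can launch it.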




Question : You have written your Python code as the mapper for a MapReduce job in a file called "myPythonScript.py", and you are using /bin/wc as the reducer.
Your mapper also uses lookup data stored in the file myDictionary.txt.
Select the correct option that will run the MapReduce job.
1. $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
-input myInputDirs \
-output myOutputDir \
-mapper myPythonScript.py \
-reducer /bin/wc \
-file myDictionary.txt

2. $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
-input myInputDirs \
-output myOutputDir \
-mapper myPythonScript.py \
-reducer /bin/wc \
-file myPythonScript.py \
-file myDictionary.txt
3. $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
-input myInputDirs \
-output myOutputDir \
-mapper myPythonScript.py \
-reducer /bin/wc \
-source myPythonScript.py \
-file myDictionary.txt
4. Any of the above

Correct Answer : 2
Explanation: You can specify any executable as the mapper and/or the reducer. The executables do not need to pre-exist on the machines in the cluster; however, if they
do not, you will need to use the "-file" option to tell the framework to pack your executable files as part of the job submission. For example:

$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
-input myInputDirs \
-output myOutputDir \
-mapper myPythonScript.py \
-reducer /bin/wc \
-file myPythonScript.py
The above example specifies a user-defined Python executable as the mapper. The option "-file myPythonScript.py" causes the Python executable to be shipped to the
cluster machines as part of the job submission.

In addition to executable files, you can also package other auxiliary files (such as dictionaries, configuration files, etc.) that may be used by the mapper and/or the
reducer. For example:

$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
-input myInputDirs \
-output myOutputDir \
-mapper myPythonScript.py \
-reducer /bin/wc \
-file myPythonScript.py \
-file myDictionary.txt
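
Files shipped with "-file" are placed in the working directory of each task, so the mapper can open myDictionary.txt by its bare file name. The question does not show the mapper's code or the dictionary format; assuming a tab-separated lookup file, a hypothetical sketch of such a mapper could look like this:

#!/usr/bin/env python
# Hypothetical myPythonScript.py that uses a lookup file shipped with "-file myDictionary.txt".
# Files distributed with -file end up in the task's current working directory,
# so they can be opened by their bare name.
import sys

# Load the lookup data once, before streaming records through.
# (Assumes myDictionary.txt holds tab-separated key/value lines.)
lookup = {}
with open("myDictionary.txt") as f:
    for entry in f:
        key, _, value = entry.strip().partition("\t")
        lookup[key] = value

for line in sys.stdin:
    token = line.strip()
    # Emit the original token and its translation (or a default) as a tab-separated pair.
    print("%s\t%s" % (token, lookup.get(token, "UNKNOWN")))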




Question : Which of the following is/are valid options for the streaming job submission command?

A. -inputformat JavaClassName
B. -outputformat JavaClassName
C. -partitioner JavaClassName
D. -combiner streamingCommand or JavaClassName
1. A,B,C
2. B,C,D
3. A,C,D
4. A,B,D
5. A,B,C,D

Correct Answer : 5
Explanation: Just as with a normal Map/Reduce job, you can specify other plugins for a streaming job:

-inputformat JavaClassName
-outputformat JavaClassName
-partitioner JavaClassName
-combiner streamingCommand or JavaClassName
The class you supply for the input format should return key/value pairs of Text class. If you do not specify an input format class, the TextInputFormat is used as the default. Since
the TextInputFormat returns keys of LongWritable class, which are actually not part of the input data, the keys will be discarded; only the values will be piped to the streaming
mapper.

The class you supply for the output format is expected to take key/value pairs of Text class. If you do not specify an output format class,
the TextOutputFormat is used as the default.
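
The "-combiner streamingCommand" form means the combiner can itself be any executable, just like the mapper and reducer. As a rough, hypothetical illustration (not part of the original question), a Python combiner that sums partial counts per key, for example alongside a word-count mapper, could look like this:

#!/usr/bin/env python
# Hypothetical streaming combiner: aggregates partial counts per key in memory
# and emits one <key, sum> pair per distinct key.
import sys
from collections import defaultdict

counts = defaultdict(int)
for line in sys.stdin:
    key, _, value = line.rstrip("\n").partition("\t")
    try:
        counts[key] += int(value)
    except ValueError:
        # Skip malformed records rather than failing the task.
        continue

for key, total in counts.items():
    print("%s\t%d" % (key, total))

Whether and how often the combiner actually runs is decided by the framework, so it must not change the final result; this is why the combined operation has to be commutative and associative.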



Related Questions


Question : Which one of the following statements describes the relationship between the ResourceManager and the ApplicationMaster?

1. The ApplicationMaster requests resources from the ResourceManager
2. The ApplicationMaster starts a single instance of the ResourceManager
3.
4. The ApplicationMaster starts an instance of the ResourceManager within each Container


Question : Which YARN component is responsible for monitoring the success or failure of a Container?
1. ResourceManager
2. ApplicationMaster
3. NodeManager
4. JobTracker


Question : When can a reduce class also serve as a combiner without affecting the output of a MapReduce program?

1. When the types of the reduce operation's input key and input value match the types of
the reducer's output key and output value, and when the reduce operation is both
commutative and associative.
2. When the signature of the reduce method matches the signature of the combine method.
3.
4. Always. The point of a combiner is to serve as a mini-reducer directly after the map phase to increase performance.
5. Never. Combiners and reducers must be implemented separately because they serve different purposes.


Question : What does the following WebHDFS command do?
curl -i -L "http://host:port/webhdfs/v1/foo/bar?op=OPEN"

1. Make a directory /foo/bar
2. Read a file /foo/bar
3.
4. Delete a directory /foo/bar


Question : Rows from an HBase table can be used directly as input to a MapReduce job.

1. True
2. False



Question : Which of the following is/are valid options while submitting a MapReduce streaming job?
A. -D dfs.data.dir=/tmp
B. -D mapred.local.dir=/tmp/local
C. -D mapred.system.dir=/tmp/system
D. -D mapred.temp.dir=/tmp/temp
1. A,B,C
2. B,C,D
3. A,C,D
4. A,B,D
5. A,B,C,D