
MapR (HPE) Hadoop Developer Certification Questions and Answers (Dumps and Practice Questions)



Question: The Mapper may use or completely ignore the input key.


1. True
2. False

Correct Answer: 1 (True). A mapper receives each record as a key-value pair but is free to ignore the key; with TextInputFormat, for example, the byte-offset key is usually discarded.
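This behavior can be sketched outside Hadoop. Below is a toy word-count map function in Python (Hadoop mappers are normally written in Java); `word_count_map` is a hypothetical name for illustration, and the point is simply that the input key is accepted but never used.

```python
def word_count_map(key, value):
    """Emit (word, 1) pairs from one input line.

    `key` mirrors the byte-offset key that TextInputFormat supplies;
    this mapper ignores it entirely and works only on `value`.
    """
    return [(word, 1) for word in value.split()]

pairs = word_count_map(0, "to be or not to be")
# -> [('to', 1), ('be', 1), ('or', 1), ('not', 1), ('to', 1), ('be', 1)]
```

Because the key is unused, calling the mapper with any key value produces identical output.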






Question: When a file is the input to a MapReduce job (with the default TextInputFormat), what is the key passed to the Mapper?
1. The key is the byte offset into the file at which the line starts
2. The key is the line contents itself
3. (option text missing in source)
4. None of the above



Correct Answer: 1. With the default TextInputFormat, the key is a LongWritable holding the byte offset at which the line starts, and the value is the text of the line itself.
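The byte-offset keying can be illustrated with a small Python sketch (`text_input_format_keys` is a made-up name for illustration; in a real job, TextInputFormat delivers these as LongWritable/Text pairs):

```python
def text_input_format_keys(data: bytes):
    """Yield (byte_offset, line_text) pairs, mimicking how TextInputFormat
    keys each line by the offset at which it starts in the file."""
    offset = 0
    for raw in data.splitlines(keepends=True):
        yield offset, raw.rstrip(b"\r\n").decode()
        offset += len(raw)

records = list(text_input_format_keys(b"first\nsecond\nthird\n"))
# -> [(0, 'first'), (6, 'second'), (13, 'third')]
```

Note the offsets count bytes, including each line's newline, which is why the second line starts at 6, not 5.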






Question: The Mapper's output must be in the form of key-value pairs.
1. True
2. False

Correct Answer: 1 (True). A mapper may emit zero, one, or many records per input, but every record it emits must be a key-value pair.



Related Questions


Question: You write a MapReduce job to process files in HDFS. Your MapReduce algorithm uses TextInputFormat: the mapper applies a regular expression over input values and emits key-value pairs with the key consisting of the matching text and the value containing the filename and byte offset. Determine the difference between setting the number of reducers to one and setting the number of reducers to zero.
1. There is no difference in output between the two settings.

2. With zero reducers, no reducer runs and the job throws an exception. With one reducer, instances of matching patterns are stored in a single file on HDFS.

3. With zero reducers, all instances of matching patterns are gathered together in one file on HDFS. With one reducer, instances of matching patterns are stored in multiple files on HDFS.

4. With zero reducers, instances of matching patterns are stored in multiple files on HDFS. With one reducer, all instances of matching patterns are gathered together in one file on HDFS.
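Option 4 describes Hadoop's actual behavior: with zero reducers the job is map-only and each map task writes its own output file directly to HDFS, while with one reducer every map output is shuffled to that single reducer and lands in one file. A toy Python model of that difference (`run_job` is a hypothetical helper, not a Hadoop API):

```python
def run_job(map_outputs, num_reducers):
    """Toy model of output layout: 0 reducers -> one output file per map
    task (map-only job); 1 reducer -> all pairs merged, sorted by key,
    and written to a single file."""
    if num_reducers == 0:
        return {f"part-m-{i:05d}": pairs
                for i, pairs in enumerate(map_outputs)}
    merged = sorted(pair for pairs in map_outputs for pair in pairs)
    return {"part-r-00000": merged}

maps = [[("cat", "f1:0")], [("dog", "f2:0"), ("ant", "f2:9")]]
print(sorted(run_job(maps, 0)))  # ['part-m-00000', 'part-m-00001']
print(run_job(maps, 1))
# {'part-r-00000': [('ant', 'f2:9'), ('cat', 'f1:0'), ('dog', 'f2:0')]}
```

In a real job the map-only case is requested with `job.setNumReduceTasks(0)`, which also skips the shuffle-and-sort phase entirely.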



Question: In a MapReduce job, you want each of your input files processed by a single map task. How do you configure a MapReduce job so that a single map task processes each input file, regardless of how many blocks the input file occupies?
1. Increase the parameter that controls minimum split size in the job configuration.

2. Write a custom MapRunner that iterates over all key-value pairs in the entire file.

3. Set the number of mappers equal to the number of input files you want to process.

4. Write a custom FileInputFormat and override the method isSplitable to always return false.
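Option 4 is the standard approach: FileInputFormat normally computes roughly one split per block, but if `isSplitable` returns false the whole file becomes a single split handled by one map task. A toy Python sketch of that split logic (`compute_splits` is an illustrative name, not Hadoop's code):

```python
def compute_splits(file_size, block_size, splitable):
    """Toy split computation: a non-splitable file yields one split
    covering the whole file, as when isSplitable() returns false;
    otherwise the file is cut into block-sized splits."""
    if not splitable:
        return [(0, file_size)]
    return [(offset, min(block_size, file_size - offset))
            for offset in range(0, file_size, block_size)]

print(compute_splits(300, 128, True))   # [(0, 128), (128, 128), (256, 44)]
print(compute_splits(300, 128, False))  # [(0, 300)]
```

Each split becomes one map task, so disabling splitting pins a whole file to a single mapper no matter how many blocks it spans.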


Question: What is the term for the process of moving map outputs to the reducers?

1. Reducing

2. Combining

3. Partitioning

4. Shuffling and sorting
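The answer is 4: "shuffle and sort" is the phase that routes each mapper's output to the reducer responsible for its key and presents the values grouped and in key order. A minimal Python sketch of that grouping (`shuffle_and_sort` is an illustrative name for this toy model):

```python
from collections import defaultdict

def shuffle_and_sort(map_outputs):
    """Group every (key, value) pair emitted by the map tasks by key and
    return the groups in key order, as the reducers would see them."""
    groups = defaultdict(list)
    for pairs in map_outputs:
        for key, value in pairs:
            groups[key].append(value)
    return [(key, groups[key]) for key in sorted(groups)]

maps = [[("be", 1), ("to", 1)], [("to", 1), ("or", 1)]]
print(shuffle_and_sort(maps))
# [('be', [1]), ('or', [1]), ('to', [1, 1])]
```

In Hadoop the partitioner decides which reducer owns each key, but within every reducer the framework guarantees this grouped, key-sorted view.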