Question : You have defined a Mapper class as below:
public class HadoopExamMapper extends Mapper { public void map(XXXXX key, YYYYY value, Context context) }
What is the correct replacement for XXXXX and YYYYY?
Correct Answer : Explanation: A. Whatever you have configured as the input key and value types must match in the Mapper class. B. The input key and value types declared on the Mapper class must match the map() method arguments. C. The Mapper's output key and value types must match the input key and value types of the Reducer class.
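The type contract can be sketched without a Hadoop installation. The block below uses a simplified, plain-Java stand-in for org.apache.hadoop.mapreduce.Mapper (the class and field names here are illustrative, not the real Hadoop API); in real code with the default TextInputFormat, XXXXX would be LongWritable (the byte offset of the line) and YYYYY would be Text (the line itself).

```java
import java.util.*;

// Dependency-free sketch: a stand-in for Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.
// With TextInputFormat the real types would be Mapper<LongWritable, Text, ...>.
public class MapperTypesDemo {
    static abstract class Mapper<KI, VI, KO, VO> {
        final List<Map.Entry<KO, VO>> output = new ArrayList<>();
        void emit(KO k, VO v) { output.add(new AbstractMap.SimpleEntry<>(k, v)); }
        // The map() arguments must repeat the class-level KEYIN/VALUEIN parameters.
        abstract void map(KI key, VI value);
    }

    // Long offset + String line stand in for LongWritable + Text.
    static class HadoopExamMapper extends Mapper<Long, String, String, Integer> {
        @Override
        void map(Long key, String value) {
            for (String word : value.split("\\s+")) emit(word, 1);
        }
    }

    public static void main(String[] args) {
        HadoopExamMapper m = new HadoopExamMapper();
        m.map(0L, "hello hadoop hello");
        System.out.println(m.output);
    }
}
```

The point of the sketch is only the generics: if the map() parameter types differ from the class-level type parameters, the code does not compile, which mirrors the runtime type-mismatch errors Hadoop reports.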
Question : Which of the following is a correct statement regarding the input key and value for the Reducer class?
1. Both the input key and value types of the Reducer must match the output key and value types of the defined Mapper class
2. The output key class and output value class in the Reducer must match those defined in the job configuration
Correct Answer : Explanation: The input to the mapper depends on what InputFormat is used. The InputFormat is responsible for reading the incoming data and shaping it into whatever format the Mapper expects. The default InputFormat is TextInputFormat, which extends FileInputFormat. If you do not change the InputFormat, using a Mapper with a different key-value type signature than the InputFormat produces will cause a type-mismatch error. If you expect different input, you will have to choose an appropriate InputFormat. You can set the InputFormat during job setup: job.setInputFormatClass(MyInputFormat.class); And like I said, by default this is set to TextInputFormat. Now, let's say your input data is a bunch of newline-separated records delimited by a comma: "A,value1", "B,value2". If you want the input to the mapper to be ("A", "value1"), ("B", "value2"), you will have to implement a custom InputFormat and RecordReader with the appropriate signature.
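In the current org.apache.hadoop.mapreduce API the setter is Job#setInputFormatClass. A minimal driver fragment might look like the following; MyInputFormat and HadoopExamMapper are placeholder names for your own classes, and the output types shown are only an example:

```java
// Job setup fragment (new mapreduce API); not runnable without a Hadoop cluster.
Job job = Job.getInstance(new Configuration(), "hadoop-exam");
job.setJarByClass(HadoopExamMapper.class);
job.setInputFormatClass(MyInputFormat.class);   // default is TextInputFormat.class
job.setMapperClass(HadoopExamMapper.class);
job.setMapOutputKeyClass(Text.class);           // must match the Mapper's KEYOUT
job.setMapOutputValueClass(IntWritable.class);  // must match the Mapper's VALUEOUT
```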
Here are a few rules regarding input and output keys and values for the Reducer class:
1. The input key class and input value class in the Reducer must match the output key class and output value class defined in the Mapper class.
2. The output key class and output value class in the Reducer must match those defined in the job configuration.
3. The behavior of the cleanup(), run(), and setup() methods is identical to that described for the Mapper class.
Now that you have a basic understanding of the MapReduce API, including framework functionality, the Mapper and Reducer, Mapper input, the record reader, reducer output data processing, and the Mapper, Reducer and Job class API, I suggest that you dive into some additional training.
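Rule 1 above can be sketched without Hadoop. This is a plain-Java stand-in (the class names are illustrative, not the real org.apache.hadoop.mapreduce.Reducer): the Reducer's first two type parameters must repeat the Mapper's last two, and reduce() receives one key together with all of its values.

```java
import java.util.*;

// Dependency-free sketch of the Mapper-to-Reducer type contract.
// If the Mapper emitted (String, Integer) pairs, the Reducer's input
// types must be String and Integer as well.
public class ReducerTypesDemo {
    static class SumReducer {
        // Mirrors reduce(KEYIN key, Iterable<VALUEIN> values, Context context).
        Map.Entry<String, Integer> reduce(String key, Iterable<Integer> values) {
            int sum = 0;
            for (int v : values) sum += v;    // classic word-count style summation
            return new AbstractMap.SimpleEntry<>(key, sum);
        }
    }

    public static void main(String[] args) {
        Map.Entry<String, Integer> out =
                new SumReducer().reduce("hello", Arrays.asList(1, 1, 1));
        System.out.println(out); // hello=3
    }
}
```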
Question : You have the following reducer class defined:
public class HadoopExamReducer extends Reducer { public void reduce(XXXXX key, YYYYY value, Context context) .... }
What is the correct replacement for XXXXX and YYYYY?
Correct Answer : Explanation: The input to the mapper depends on what InputFormat is used. The InputFormat is responsible for reading the incoming data and shaping it into whatever format the Mapper expects. The default InputFormat is TextInputFormat, which extends FileInputFormat.
If you do not change the InputFormat, using a Mapper with a different key-value type signature than the InputFormat produces will cause a type-mismatch error. If you expect different input, you will have to choose an appropriate InputFormat. You can set the InputFormat during job setup:
job.setInputFormatClass(MyInputFormat.class); And like I said, by default this is set to TextInputFormat.
Now, let's say your input data is a bunch of newline-separated records delimited by a comma:
"A,value1"
"B,value2"
If you want the input to the mapper to be ("A", "value1"), ("B", "value2"), you will have to implement a custom InputFormat and RecordReader with the appropriate signature.
In short, add a class which extends FileInputFormat and a class which extends RecordReader. Override the FileInputFormat#createRecordReader method, and have it return an instance of your custom RecordReader.
Then you will have to implement the required RecordReader logic. The simplest way to do this is to create an instance of LineRecordReader in your custom RecordReader and delegate all basic responsibilities to this instance. In the getCurrentKey() and getCurrentValue() methods you will implement the logic for extracting the comma-delimited Text contents by calling LineRecordReader#getCurrentValue and splitting it on the comma.
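The delegation pattern just described can be sketched without Hadoop. Below, LineReader is a toy stand-in for LineRecordReader, and CommaRecordReader is a hypothetical custom reader that forwards line reading to the delegate and derives its key and value by splitting each line on the first comma:

```java
import java.util.*;

// Dependency-free sketch of "wrap LineRecordReader, split on comma".
public class CommaRecordReaderDemo {
    // Stand-in for LineRecordReader: yields raw lines one at a time.
    static class LineReader {
        private final Iterator<String> it;
        private String current;
        LineReader(List<String> lines) { this.it = lines.iterator(); }
        boolean nextKeyValue() {
            if (!it.hasNext()) return false;
            current = it.next();
            return true;
        }
        String getCurrentValue() { return current; }
    }

    // Custom reader: delegates advancement to the inner reader, then
    // splits the current line on the first comma to produce (key, value).
    static class CommaRecordReader {
        private final LineReader delegate;
        CommaRecordReader(LineReader d) { this.delegate = d; }
        boolean nextKeyValue() { return delegate.nextKeyValue(); }
        String getCurrentKey()   { return delegate.getCurrentValue().split(",", 2)[0]; }
        String getCurrentValue() { return delegate.getCurrentValue().split(",", 2)[1]; }
    }

    public static void main(String[] args) {
        CommaRecordReader rr = new CommaRecordReader(
                new LineReader(Arrays.asList("A,value1", "B,value2")));
        while (rr.nextKeyValue()) {
            System.out.println(rr.getCurrentKey() + " -> " + rr.getCurrentValue());
        }
    }
}
```

In real Hadoop code the wrapper would extend RecordReader&lt;Text, Text&gt;, hold a LineRecordReader field, forward initialize(), nextKeyValue(), getProgress() and close() to it, and wrap the split halves in Text objects.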