
MapR (HP) Hadoop Developer Certification Questions and Answers (Dumps and Practice Questions)



Question : The Hadoop API uses basic Writable types such as LongWritable, Text, and IntWritable. They offer almost the same features as the corresponding default Java classes.
What are these Writable data types optimized for?


1. Writable data types are specifically optimized for network transmissions
2. Writable data types are specifically optimized for file system storage
3. (option not shown)
4. Writable data types are specifically optimized for data retrieval



Correct Answer : 1

Explanation: Data needs to be represented in a format optimized for network transmission. Hadoop depends on moving data between DataNodes very quickly, and Writable types provide a compact binary serialization for exactly that purpose.
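The Writable contract behind these types can be sketched in plain Java with no Hadoop dependency. The `write`/`readFields` method names mirror `org.apache.hadoop.io.Writable`; the `MiniWritable` and `MiniIntWritable` names are illustrative stand-ins, not Hadoop classes:

```java
import java.io.*;

// Minimal stand-in for org.apache.hadoop.io.Writable (illustrative only)
interface MiniWritable {
    void write(DataOutput out) throws IOException;      // serialize to a binary stream
    void readFields(DataInput in) throws IOException;   // deserialize from a binary stream
}

// An IntWritable-style box: serializes to exactly 4 bytes, which is what
// makes it cheap to ship between nodes during the shuffle
class MiniIntWritable implements MiniWritable {
    private int value;
    MiniIntWritable() {}
    MiniIntWritable(int value) { this.value = value; }
    int get() { return value; }
    public void write(DataOutput out) throws IOException { out.writeInt(value); }
    public void readFields(DataInput in) throws IOException { value = in.readInt(); }
}

public class WritableDemo {
    public static void main(String[] args) throws IOException {
        // Round-trip through a byte buffer, as Hadoop does across the wire
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        new MiniIntWritable(42).write(new DataOutputStream(buf));

        MiniIntWritable copy = new MiniIntWritable();
        copy.readFields(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));

        System.out.println(buf.size());  // 4 bytes on the wire
        System.out.println(copy.get());  // 42
    }
}
```

The fixed, compact binary layout (no class metadata, no field names) is what distinguishes Writables from default Java serialization.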





Question : Can a custom data type for MapReduce processing be implemented?

1. No, Hadoop does not provide techniques for custom datatypes
2. Yes, but only for mappers
3. (option not shown)
4. Yes, but only for reducers

Correct Answer : 3
Explanation: Developers can easily implement new data types for any object. It is common practice to take an existing class and extend it with the Writable interface.
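A custom type only needs to implement the two Writable methods. The sketch below is self-contained: `MiniWritable` is a stand-in for `org.apache.hadoop.io.Writable` (with Hadoop on the classpath you would implement the real interface, or `WritableComparable` for keys), and `PointWritable` is a hypothetical custom value type:

```java
import java.io.*;

// Stand-in for org.apache.hadoop.io.Writable (illustrative only)
interface MiniWritable {
    void write(DataOutput out) throws IOException;
    void readFields(DataInput in) throws IOException;
}

// A hypothetical custom composite type usable as a MapReduce value
class PointWritable implements MiniWritable {
    private double x, y;
    PointWritable() {}                       // Hadoop needs a no-arg constructor for deserialization
    PointWritable(double x, double y) { this.x = x; this.y = y; }
    public void write(DataOutput out) throws IOException {
        // Fields are written in a fixed order...
        out.writeDouble(x);
        out.writeDouble(y);
    }
    public void readFields(DataInput in) throws IOException {
        // ...and must be read back in exactly the same order
        x = in.readDouble();
        y = in.readDouble();
    }
    public String toString() { return x + "," + y; }
}

public class CustomWritableDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        new PointWritable(1.5, -2.0).write(new DataOutputStream(buf));
        PointWritable copy = new PointWritable();
        copy.readFields(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(copy);  // 1.5,-2.0
    }
}
```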




Question : What happens if the mapper output type does not match the reducer input type?

1. The Hadoop API will convert the data to the type needed by the reducer.
2. Data input/output inconsistency cannot occur; a preliminary validation check is executed before the job runs to ensure consistency.
3. (option not shown)
4. A run-time exception will be thrown and the MapReduce job will fail



Correct Answer : 4


Explanation: Reducers consume the mappers' output, and Java is a strongly typed language. Therefore, an exception is thrown at run time if the types do not match.
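Hadoop performs no type reconciliation between map output and reduce input; the mismatch surfaces only when records are actually deserialized. A plain-Java analogy (not the Hadoop API) shows the same deferred failure with a raw-typed collection:

```java
import java.util.*;

public class TypeMismatchDemo {
    public static void main(String[] args) {
        // The raw reference plays the role of Hadoop's untyped intermediate
        // data between mapper and reducer: the wrong type slips in unchecked.
        List<String> strings = new ArrayList<>();
        List raw = strings;
        raw.add(42);  // compiles (with an unchecked warning)

        try {
            String s = strings.get(0);  // the implicit cast fails only at run time
            System.out.println(s);
        } catch (ClassCastException e) {
            System.out.println("run-time type mismatch");
        }
    }
}
```

In a real job the failure appears the same way: the job compiles and launches, then fails with an exception once a mismatched record reaches the reducer.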


Related Questions


Question : You have the following sample Mapper class and its map() method.

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class ProjectionMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    private Text word = new Text();
    private LongWritable count = new LongWritable();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Input lines are tab-separated; column 0 is the word, column 2 the count
        String[] split = value.toString().split("\t+");
        word.set(split[0]);
        if (split.length > 2) {
            try {
                count.set(Long.parseLong(split[2]));
                context.write(word, count);
            } catch (NumberFormatException e) {
                // skip records whose count column does not parse as a number
            }
        }
    }
}
Now, select the correct statement based on the above code.
1. The four type arguments to the Mapper class in angle brackets are the input key and value types followed by the output key and value types

2. The Mapper class always uses the map() method

3. (option not shown)
4. 1,2

5. 1,2,3
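The parsing rule inside map() can be exercised outside Hadoop. This plain-Java replica of the logic (the input lines are made-up examples) shows which records emit a (word, count) pair:

```java
public class ProjectionLogicDemo {
    // Same rule as ProjectionMapper.map(): split on tabs, emit
    // (column 0, column 2 as long) only when column 2 parses as a number
    static String project(String line) {
        String[] split = line.split("\t+");
        if (split.length > 2) {
            try {
                long count = Long.parseLong(split[2]);
                return split[0] + "\t" + count;
            } catch (NumberFormatException e) {
                // malformed count column: drop the record, as the mapper does
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(project("hadoop\t2019\t17"));   // hadoop	17
        System.out.println(project("hadoop\t2019\tbad"));  // null (unparsable count)
        System.out.println(project("hadoop\t17"));         // null (too few columns)
    }
}
```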


Question : Select the correct statement regarding the map() method


1. Each call to the map() method produces a list of key-value pairs for all the records in an input split

2. Each call to the map() method produces a list of key-value pairs for all the records in a block of a file

3. (option not shown)

4. Both 1 and 2



Question : We have a Reducer class example as below.

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.Reducer;

public class LongSumReducer<KEY> extends Reducer<KEY, LongWritable, KEY, LongWritable> {

    private LongWritable result = new LongWritable();

    public void reduce(KEY key, Iterable<LongWritable> values,
            Context context) throws IOException, InterruptedException {
        long sum = 0;
        // Sum every value grouped under this key
        for (LongWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}

Select the correct option
1. The reduce method emits the final result as a key and a value; both are saved to HDFS.

2. The reduce method emits only the final value, which is saved to HDFS.

3. (option not shown)

4. 1,3

5. 2,3
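Stripped of the Hadoop types, the reduce() body above is an ordinary sum over the values grouped under one key. A minimal plain-Java sketch of that aggregation:

```java
import java.util.*;

public class SumLogicDemo {
    // Same aggregation as LongSumReducer.reduce(): one output value per key,
    // the sum of all values grouped under that key
    static long reduce(Iterable<Long> values) {
        long sum = 0;
        for (long val : values) {
            sum += val;
        }
        return sum;
    }

    public static void main(String[] args) {
        // e.g. all counts shuffled to the key "hadoop"
        System.out.println(reduce(Arrays.asList(3L, 5L, 9L)));  // 17
    }
}
```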


Question : If the output key of the Mapper is Text, then the input key of the Reducer must be Text
1. True
2. False


Question : Which of the following is a correct way to launch a Hadoop job?


1. synchronously

2. asynchronously

3. (option not shown)

4. None of 1 and 2



Question : When you write a Java MapReduce application, which method is the entry point for the application?
1. main()

2. ToolRunner.run()

3. (option not shown)

4. reduce()

5. job.waitForCompletion()