
MapR (HP) Hadoop Developer Certification Questions and Answers (Dumps and Practice Questions)



Question : In MapR, using the ___________ feature means that each task tracker will reserve one or more ________ slots for executing ____________.


1. ExpressLane, 'ephemeral', 'small jobs'

2. FastLane, non-ephemeral, big jobs

3. [option not shown in the source]

4. ExpressLane, 'non-ephemeral', big jobs


Correct Answer : 1
Explanation: MapR provides an express path (called ExpressLane) that works in conjunction with the Fair Scheduler. ExpressLane reserves ephemeral slots so that small
MapReduce jobs can run even when all regular slots are occupied by long-running tasks. Small jobs are given this special treatment only when the cluster is busy,
and only if they meet the criteria specified in mapred-site.xml.
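The small-job criteria mentioned above are set through mapred-site.xml properties. A hedged sketch follows; the property names and defaults are taken from MapR's documented smalljob settings, so verify them against your MapR release before relying on them:

```xml
<!-- Sketch of ExpressLane small-job criteria in mapred-site.xml.
     Names/defaults follow MapR's documented defaults; check your release. -->
<property>
  <name>mapred.fairscheduler.smalljob.schedule.enable</name>
  <value>true</value>  <!-- turn ExpressLane scheduling on or off -->
</property>
<property>
  <name>mapred.fairscheduler.smalljob.max.maps</name>
  <value>10</value>    <!-- a "small" job has at most this many map tasks -->
</property>
<property>
  <name>mapred.fairscheduler.smalljob.max.reducers</name>
  <value>10</value>    <!-- ...and at most this many reduce tasks -->
</property>
<property>
  <name>mapred.fairscheduler.smalljob.max.inputsize</name>
  <value>10737418240</value>  <!-- total input of at most 10 GB -->
</property>
```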





Question : Slot configuration depends on

1. CPU

2. RAM

3. [option not shown in the source]

4. 1,2

5. 1,2,3

Correct Answer : [not shown in the source]
Explanation:






Question : Which of the following can be used as valid ways to optimize a MapReduce job?
A. Using the reducer as a combiner, or having a custom combiner (mini-reducer)
B. Compressing intermediate results of map tasks
C. Optimizing the number of reducers
D. Using speculative execution
E. Reusing the JVM
F. Configuring sorting properties
G. Configuring Java properties
1. C,D,E,F,G
2. A,B,C,D
3. [option not shown in the source]
4. B,C,D,E,F
5. A,B,C,D,E,F,G

Correct Answer : [not shown in the source]
Explanation:
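As an illustrative sketch, several of the listed optimizations correspond directly to standard Hadoop configuration properties. The names below are the usual MRv2 ones; exact names and defaults vary by Hadoop version and distribution, so treat this as an assumption to verify:

```xml
<!-- Illustrative mapred-site.xml / per-job settings for options B, D, E.
     Property names are the standard MRv2 ones; check your Hadoop version. -->
<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>  <!-- B: compress intermediate map output -->
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
<property>
  <name>mapreduce.map.speculative</name>
  <value>true</value>  <!-- D: speculative execution for map tasks -->
</property>
<property>
  <name>mapreduce.job.jvm.numtasks</name>
  <value>-1</value>    <!-- E: unlimited JVM reuse (honored by classic MRv1) -->
</property>
```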


Related Questions


Question : You have the following sample Mapper class and its map() method.

public class ProjectionMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    private Text word = new Text();
    private LongWritable count = new LongWritable();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] split = value.toString().split("\t+");
        word.set(split[0]);
        if (split.length > 2) {
            try {
                count.set(Long.parseLong(split[2]));
                context.write(word, count);
            } catch (NumberFormatException e) {
                // ignore records with a malformed count
            }
        }
    }
}
Now, select the correct statement based on above code.
1. The four type arguments to the Mapper class, given in angle brackets, are the input key and value types followed by the output key and value types

2. A Mapper class always uses the map() method

3. [option not shown in the source]
4. 1,2

5. 1,2,3
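The tab-splitting logic inside the map() method above can be exercised in plain Java, independent of Hadoop. The sample record below is invented for illustration:

```java
import java.util.Arrays;

public class SplitDemo {
    public static void main(String[] args) {
        // A made-up TSV record: word, a middle field, then a count.
        // split("\t+") treats a run of tabs as a single delimiter.
        String value = "hadoop\tx\t42";
        String[] split = value.split("\t+");
        System.out.println(Arrays.toString(split)); // [hadoop, x, 42]
        // The mapper only emits when there are at least three fields:
        if (split.length > 2) {
            long count = Long.parseLong(split[2]);
            System.out.println(split[0] + " -> " + count); // hadoop -> 42
        }
    }
}
```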


Question : Select the correct statement regarding the map() method

1. Each call to the map() method will produce a list of key-value pairs for all the records in an input split

2. Each call to the map() method will produce a list of key-value pairs for all the records in a block of a file

3. [option not shown in the source]

4. Both 1 and 2
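For context, Hadoop's framework invokes map() once per input record, and each call may emit zero or more key-value pairs. A plain-Java sketch of that per-record loop, with no Hadoop dependency and invented record data:

```java
import java.util.ArrayList;
import java.util.List;

public class MapPerRecordDemo {
    static List<String> emitted = new ArrayList<>();

    // Stand-in for a map() that emits one pair per well-formed record
    // and nothing for malformed ones.
    static void map(String value) {
        String[] split = value.split("\t+");
        if (split.length > 2) emitted.add(split[0] + "=" + split[2]);
    }

    public static void main(String[] args) {
        String[] records = {"a\tx\t1", "broken", "b\ty\t2"};
        for (String record : records) {
            map(record); // one map() call per record, not per split
        }
        System.out.println(emitted); // [a=1, b=2]
    }
}
```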



Question : We have a reducer class example, as below.

public class LongSumReducer<KEY> extends Reducer<KEY, LongWritable, KEY, LongWritable> {

    private LongWritable result = new LongWritable();

    public void reduce(KEY key, Iterable<LongWritable> values,
            Context context) throws IOException, InterruptedException {
        long sum = 0;
        for (LongWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}

Select the correct option
1. The reduce method emits the final result as a key and a value; both will be saved on HDFS.

2. The reduce method emits only the final result value, which will be saved on HDFS.

3. [option not shown in the source]

4. 1,3

5. 2,3
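The summation inside reduce() can be mirrored in plain Java over one key's grouped values. The key name and the values below are invented for illustration:

```java
import java.util.Arrays;
import java.util.List;

public class SumReduceDemo {
    // Mirrors the loop inside reduce(): sum every value seen for one key.
    static long reduce(List<Long> values) {
        long sum = 0;
        for (long v : values) {
            sum += v;
        }
        return sum;
    }

    public static void main(String[] args) {
        // One reduce() call sees all values grouped under a single key.
        System.out.println("hadoop\t" + reduce(Arrays.asList(10L, 30L, 2L)));
    }
}
```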


Question : If the output key of the Mapper is Text, then the input key of the Reducer must be Text
1. True
2. False


Question : Which of the following is a correct way by which a Hadoop job can be launched?

1. synchronously

2. asynchronously

3. [option not shown in the source]

4. None of 1 and 2



Question : When you write a Java MapReduce application, which method will be the entry point for the application?
1. main()

2. ToolRunner.run()

3. [option not shown in the source]

4. reduce()

5. job.waitForCompletion()