
Mapr (HP) Hadoop Developer Certification Questions and Answers (Dumps and Practice Questions)



Question : If the output key of the Mapper is Text, then the input key of the Reducer must be Text.
1. True
2. False

Correct Answer : 1
Explanation: When only a mapper and a reducer class are defined for a job, the key/value pairs emitted by the mapper are consumed by the
reducer. So, the output types of the mapper must be the same as the input types of the reducer.

(input) <k1, v1> -> map -> <k2, v2> -> reduce -> <k3, v3> (output)

When a combiner class is defined for a job, the intermediate key/value pairs are combined on the same node as the map task before being sent to the reducer.
The combiner reduces the network traffic between the mappers and the reducers.

Note that the combiner performs the same function as the reducer (combining values per key), but the combiner's input and output key/value types must be of the same type,
while for the reducer this is not a requirement.

(input) <k1, v1> -> map -> <k2, v2> -> combine* -> <k2, v2> -> reduce -> <k3, v3> (output)

In the scenario where the reducer class is also used as the combiner class, the combiner/reducer input/output key/value types must be of the same type
(k2/v2), as below. If not, due to type erasure the program compiles properly but fails with a runtime error.

(input) <k1, v1> -> map -> <k2, v2> -> combine* -> <k2, v2> -> reduce -> <k3, v3> (output)
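As an illustrative sketch (assuming the standard org.apache.hadoop.mapreduce API; the library classes TokenCounterMapper and IntSumReducer ship with Hadoop), a driver that reuses the reducer as a combiner declares the k2/v2 and k3/v3 types explicitly. The combiner assignment is only legal because IntSumReducer's input and output types are both <Text, IntWritable>:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);

        job.setMapperClass(TokenCounterMapper.class); // emits <Text, IntWritable> (k2/v2)
        job.setCombinerClass(IntSumReducer.class);    // combiner: <Text, IntWritable> in AND out
        job.setReducerClass(IntSumReducer.class);     // consumes <Text, IntWritable> (k2/v2)

        // Map output types (k2/v2) -- the reducer's input types must match these.
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        // Final reduce output types (k3/v3).
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

A type mismatch in these declarations compiles fine (generics are erased) but fails only at runtime.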




Question : Which is the correct way by which a Hadoop job can be launched?


1. synchronously

2. asynchronously

3. Both synchronously and asynchronously

4. None of 1 and 2


Correct Answer : 3
Explanation: There are two ways to launch a job: synchronously and asynchronously. job.waitForCompletion() launches the job synchronously;
the driver code blocks at this line, waiting for the job to complete.
The true argument tells the framework to write verbose output to the controlling terminal of the job.
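As a fragment sketch (assuming an already-configured org.apache.hadoop.mapreduce.Job object named job), the two launch styles look like:

```java
// Synchronous launch: the driver blocks at this line until the job finishes.
// The `true` argument tells the framework to print verbose progress output.
boolean success = job.waitForCompletion(true);

// Asynchronous launch: submit() returns immediately and the driver
// polls for completion itself.
job.submit();
while (!job.isComplete()) {
    Thread.sleep(5000); // check status every five seconds
}
```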







Question : When you write a Java MapReduce application, which method will be the entry point for the application?
1. main()

2. ToolRunner.run()

3. …

4. reduce()

5. job.waitForCompletion()

Correct Answer : 1
Explanation: A Hadoop job is a Java application; hence, it starts with the main() method.
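A sketch of a typical driver (the class name MyDriver is illustrative) shows how these fit together: main() is the JVM entry point, and it delegates to ToolRunner.run(), which parses the generic Hadoop options and then invokes run():

```java
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // Build, configure, and submit the Job here.
        return 0;
    }

    // Entry point: the JVM always starts in main(); ToolRunner.run()
    // is called from it, not the other way around.
    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new MyDriver(), args));
    }
}
```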


Related Questions


Question : The key output of the Mapper must be identical to the Reducer input key.
1. True
2. False




Question : One key is processed by one reducer?
1. True
2. False




Question : The number of mappers is configured in the JobConf object?
1. True
2. False




Question : The number of reducers is defined by the user?
1. True
2. False
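As a fragment sketch (assuming a configured job / jobConf object), the reducer count is user-defined, while the mapper count follows from the input:

```java
// New (mapreduce) API: the user sets the reducer count on the Job.
job.setNumReduceTasks(4);

// Old (mapred) API equivalent:
// jobConf.setNumReduceTasks(4);

// JobConf.setNumMapTasks(n) exists in the old API but is only a hint:
// the actual number of map tasks is driven by the number of input splits.
```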


Question : A developer has submitted a YARN job by calling the submitApplication() method on the ResourceManager.
Please select the correct order of the steps below after that:

1. Container will be managed by Node Manager after job submission
2. Resource Manager triggers its sub-component Scheduler, which allocates containers for mapreduce job execution.
3. …

1. 2,3,1
2. 1,2,3
3. …
4. 1,3,2


Question : Which statement is correct for the below code snippet?

public class TokenCounterMapper
        extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
        }
        context.write(word, one);
    }
}


1. All key value pair will be written to context
2. Some key value pair will be written to context
3. …
4. No key value pair will be written to context
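As written, the while loop keeps overwriting word, and context.write() runs only after the loop, so each input record emits just its last token. A plain-Java sketch (no Hadoop required; the class and method names here are illustrative) contrasting the two placements of the write:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

public class TokenLoopDemo {

    // Mimics the snippet: word is overwritten in the loop,
    // and the "write" happens once, after the loop.
    static List<String> writeAfterLoop(String value) {
        List<String> emitted = new ArrayList<>();
        String word = null;
        StringTokenizer itr = new StringTokenizer(value);
        while (itr.hasMoreTokens()) {
            word = itr.nextToken();
        }
        emitted.add(word); // stands in for context.write(word, one) outside the loop
        return emitted;
    }

    // The usual word-count mapper writes inside the loop instead.
    static List<String> writeInsideLoop(String value) {
        List<String> emitted = new ArrayList<>();
        StringTokenizer itr = new StringTokenizer(value);
        while (itr.hasMoreTokens()) {
            emitted.add(itr.nextToken()); // context.write(word, one) inside the loop
        }
        return emitted;
    }

    public static void main(String[] args) {
        System.out.println(writeAfterLoop("the quick brown fox"));  // only the last token
        System.out.println(writeInsideLoop("the quick brown fox")); // every token
    }
}
```

Moving the write inside the loop restores the usual word-count behavior of one key/value pair per token.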