
MapR (HP) Hadoop Developer Certification Questions and Answers (Dumps and Practice Questions)



Question : Do settings made via the Java API override values in the configuration files?

1. No. The settings in the configuration files take precedence
2. Yes. Settings made via the Java API take precedence
3. Access Mostly Uused Products by 50000+ Subscribers
4. Only global configuration settings are captured in the configuration files on the namenode.
Only a very few job parameters can be set using the Java API



Correct Answer : Get Latest Questions and Answer :


Explanation: The developer has control over job configuration from the driver code: settings made through the Java API take precedence over values loaded from the configuration files, unless a property is marked final in those files.
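A minimal sketch of that precedence rule. This is not the real org.apache.hadoop.conf.Configuration class — it just models the assumed semantics with java.util.Properties: file values act as defaults, and anything set through the API afterwards wins.

```java
import java.util.Properties;

// Sketch of the assumed precedence semantics, not Hadoop's actual class:
// XML-file values are defaults; API-set values override them.
public class ConfPrecedence {
    public static String effectiveValue(Properties fromFiles,
                                        Properties fromApi,
                                        String key) {
        // An API-set value takes precedence over the file value
        return fromApi.getProperty(key, fromFiles.getProperty(key));
    }

    public static void main(String[] args) {
        Properties files = new Properties();
        files.setProperty("mapreduce.job.reduces", "1");   // e.g. from mapred-site.xml
        Properties api = new Properties();
        api.setProperty("mapreduce.job.reduces", "4");     // e.g. conf.set(...) in the driver
        System.out.println(effectiveValue(files, api, "mapreduce.job.reduces")); // prints 4
    }
}
```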





Question : What is the distributed cache?
1. The distributed cache is a special component on the namenode that caches frequently used data
for faster client response. It is used during the reduce step
2. The distributed cache is a special component on the datanode that caches frequently used data
for faster client response. It is used during the map step
3. Access Mostly Uused Products by 50000+ Subscribers
4. The distributed cache is a component that allows developers to deploy jars for Map-Reduce processing.



Correct Answer : Get Latest Questions and Answer :


Explanation: The distributed cache is Hadoop's answer to the problem of deploying third-party libraries and side files. It copies the registered files to every datanode before the tasks start, so each task can read them locally.
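A common use of the distributed cache is a map-side join: a small lookup file is shipped to every node and loaded once in the mapper's setup(). The sketch below shows only that pattern in plain Java — in a real job the file would be registered with the job (e.g. via the driver) and read from local disk in setup(); here the "cached" file is just an in-memory map, and the class/method names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the map-side-join pattern the distributed
// cache enables; not real Hadoop Mapper code.
public class CacheJoinSketch {
    private final Map<String, String> lookup = new HashMap<>();

    // Stands in for setup(): load the cached side file once per task.
    public void setup(Map<String, String> cachedFileContents) {
        lookup.putAll(cachedFileContents);
    }

    // Stands in for map(): enrich each record from the cached table.
    public String map(String userId, String event) {
        String country = lookup.getOrDefault(userId, "UNKNOWN");
        return event + "," + country;
    }
}
```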





Question : What is a Writable?

1. Writable is a Java interface that needs to be implemented for streaming data to remote servers.
2. Writable is a Java interface that needs to be implemented for HDFS writes.
3. Access Mostly Uused Products by 50000+ Subscribers
4. None of these answers are correct




Correct Answer : Get Latest Questions and Answer :

Explanation: Hadoop performs a lot of data transfers between datanodes and between the map and reduce phases. Writable is needed for MapReduce processing to keep those transfers fast: the Writable interface makes serialization of keys and values compact, quick, and easy for Hadoop.
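The interface declares two methods, write(DataOutput) and readFields(DataInput). The sketch below shows that pattern self-contained with only java.io — it does not implement the actual org.apache.hadoop.io.Writable interface, but the two method signatures match it, and the round-trip helper mimics what Hadoop does when shipping keys/values between map and reduce tasks.

```java
import java.io.*;

// Self-contained sketch of the Writable pattern using only java.io.
// The real org.apache.hadoop.io.Writable interface declares exactly
// these two methods: write(DataOutput) and readFields(DataInput).
public class PointWritable {
    public int x;
    public int y;

    public void write(DataOutput out) throws IOException {
        out.writeInt(x);
        out.writeInt(y);
    }

    public void readFields(DataInput in) throws IOException {
        x = in.readInt();
        y = in.readInt();
    }

    // Round-trip helper: serialize then deserialize, as Hadoop does
    // when moving key/value pairs between tasks.
    public static PointWritable roundTrip(PointWritable p) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            p.write(new DataOutputStream(bytes));
            PointWritable copy = new PointWritable();
            copy.readFields(new DataInputStream(
                    new ByteArrayInputStream(bytes.toByteArray())));
            return copy;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```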







Related Questions


Question : Which of the following can you use in MRv1 to monitor jobs?


1. Job Tracker or Task Tracker Web UIs

2. You can use the metrics database (available through the MCS) to monitor jobs

3. Access Mostly Uused Products by 50000+ Subscribers

4. 1,2
5. 1,2,3



Question : In the MCS, what information can you track about a job?


1. The time the job started executing

2. Percentage of map tasks executed

3. Access Mostly Uused Products by 50000+ Subscribers

4. 2,3

5. 1,2,3


Question : Which of the following problems can be solved using MapReduce?


1. Summarizing data

2. Filtering Data

3. Access Mostly Uused Products by 50000+ Subscribers

4. 2,3

5. 1,2,3



Question : Which of the following can be done using MapReduce?

1. We can count and index groups of data

2. Filter the data and fetch the top n records

3. Access Mostly Uused Products by 50000+ Subscribers

4. 2,3
5. 1,2,3



Question : Select the correct order of the MapReduce program flow
A. Data fed from input file to Mapper
B. Transformation of data
C. Shuffling of data
D. Data written to the output file

1. D,B,C,A
2. C,D,B,A
3. Access Mostly Uused Products by 50000+ Subscribers
4. B,A,C,D
5. A,D,B,C
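The flow the question describes — input fed to the mapper, transformation, shuffle, output written — can be traced with a tiny in-memory word count. This is a plain-Java illustration of the pipeline stages, not real Hadoop code; the class name and List-based "input file" are assumptions for the sketch.

```java
import java.util.*;

// Minimal in-memory word count tracing the MapReduce stages:
// input -> map -> shuffle/sort -> reduce -> output.
public class WordCountFlow {
    public static Map<String, Integer> run(List<String> lines) {
        // map: emit (word, 1) for every token in the input
        List<Map.Entry<String, Integer>> emitted = new ArrayList<>();
        for (String line : lines)
            for (String word : line.split("\\s+"))
                emitted.add(Map.entry(word, 1));

        // shuffle: group the emitted values by key, sorted by key
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (Map.Entry<String, Integer> kv : emitted)
            grouped.computeIfAbsent(kv.getKey(), k -> new ArrayList<>())
                   .add(kv.getValue());

        // reduce: sum the values per key; in a real job this result
        // would then be written to the output file
        Map<String, Integer> out = new TreeMap<>();
        grouped.forEach((word, ones) ->
                out.put(word, ones.stream().mapToInt(Integer::intValue).sum()));
        return out;
    }
}
```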


Question : You have a map() method in a Mapper class as below.
public void map(LongWritable key, Text value, Context context){}
What is the use of context object here?
A. It keeps Job configuration information
B. It gives the currently running mapper access to its input split
C. It only contains the next record pointer to input split
D. A,B
E. A,B,C

1. It keeps Job configuration information

2. It gives the currently running mapper access to its input split

3. Access Mostly Uused Products by 50000+ Subscribers

4. 1,2

5. 1,2,3
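The role of the context parameter in that map() signature can be sketched as follows. This is an assumed simplification of org.apache.hadoop.mapreduce.Mapper.Context, not the real class: it models the two things the question asks about — access to job configuration and a write() method for emitting key/value pairs. The mapper shown is a made-up example that emits each line with its length.

```java
import java.util.*;

// Simplified stand-in for Mapper.Context: holds job configuration
// and collects the key/value pairs written by the mapper.
public class ContextSketch {
    public final Map<String, String> configuration = new HashMap<>();
    public final List<String> written = new ArrayList<>();

    public void write(String key, long value) {
        written.add(key + "\t" + value);
    }
}

// A map() in the style of the question: key is the byte offset,
// value is the line; emits (line, length-of-line).
class LineLengthMapper {
    public void map(long key, String value, ContextSketch context) {
        context.write(value, value.length());
    }
}
```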