
MapR (HP) Hadoop Developer Certification Questions and Answers (Dumps and Practice Questions)



Question : What information do Tasks send back to the TaskTracker?
1. Task Status

2. Task Counters

3. (option text not shown)

4. Only 1 and 2

5. 1,2,3


Correct Answer : (not shown)
Explanation: To process the data, the JobTracker assigns tasks to TaskTrackers. Suppose that, while processing is going on, one DataNode in the cluster goes down. The NameNode must learn that this DataNode is down; otherwise it cannot continue processing by using replicas. To keep the NameNode aware of the status (active/inactive) of each DataNode, every DataNode sends a heartbeat signal, by default every 3 seconds; if no heartbeat arrives for about 10 minutes, the NameNode marks that node dead. This is called the heartbeat mechanism.
In the same way, the JobTracker assigns tasks only to TaskTrackers that are sending heartbeats. If a TaskTracker fails to send a heartbeat within the expiry window (10 minutes by default), the JobTracker treats it as inactive and looks for an idle TaskTracker to which it can reassign the task. If no TaskTracker is idle, the JobTracker waits until one becomes idle.
When a TaskTracker sends a heartbeat back to the JobTracker, it also includes other information such as task status, task counters, and data read/write status.
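The expiry logic described above can be sketched in a few lines. This is an illustrative model, not Hadoop's actual API: the class and method names are invented, and the intervals are the usual Hadoop 1.x defaults.

```python
import time

# Illustrative defaults (not Hadoop's actual configuration constants):
HEARTBEAT_INTERVAL_SECS = 3    # how often a worker node sends a heartbeat
NODE_EXPIRY_SECS = 10 * 60     # master marks the node dead after this much silence

class HeartbeatMonitor:
    """Tracks the last heartbeat seen from each worker node."""

    def __init__(self, expiry_secs=NODE_EXPIRY_SECS):
        self.expiry_secs = expiry_secs
        self.last_seen = {}

    def heartbeat(self, node_id, now=None):
        # A real heartbeat also carries task status, counters, etc.
        self.last_seen[node_id] = now if now is not None else time.time()

    def live_nodes(self, now=None):
        # Nodes silent for longer than the expiry window are treated as dead.
        now = now if now is not None else time.time()
        return [n for n, t in self.last_seen.items()
                if now - t < self.expiry_secs]

m = HeartbeatMonitor()
m.heartbeat("tracker-1", now=0)
m.heartbeat("tracker-2", now=500)
# At t=650s, tracker-1 has been silent for 650s (> 600s) and is considered dead.
print(m.live_nodes(now=650))   # ['tracker-2']
```

Tasks to assign would then go only to the nodes returned by `live_nodes`.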






Question : Select the statements that are correct with respect to Spark Streaming

1. You can use programs written in other languages in MapReduce

2. It may increase or decrease the performance of the job

3. (option text not shown)

4. 1,2

5. 1,2,3


Correct Answer : (not shown)
Explanation:




Question : In Hadoop Streaming

1. PipeMap is used to run your Mapper written in any other language

2. PipeReduce is used to run your Reducer written in any other language

3. (option text not shown)

4. 1,2

5. 1,2,3

Correct Answer : (not shown)
Explanation:

Related Questions


Question : The Hadoop API uses basic Writable types such as LongWritable, Text, and IntWritable. They have almost the same features as the default Java classes.
What are these Writable data types optimized for?


1. Writable data types are specifically optimized for network transmissions
2. Writable data types are specifically optimized for file system storage
3. (option text not shown)
4. Writable data types are specifically optimized for data retrieval
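Writable types serialize to a compact, fixed binary layout so keys and values can be shipped efficiently over the network during the shuffle. The contrast with a text encoding can be illustrated in Python with the standard `struct` module (a conceptual comparison only, not Hadoop code):

```python
import struct

value = 1234567890
binary = struct.pack(">i", value)       # 4 bytes, like IntWritable's serialized form
text = str(value).encode("utf-8")       # 10 bytes as decimal text
print(len(binary), len(text))           # 4 10

# Deserialization recovers the exact value, analogous to readFields().
decoded, = struct.unpack(">i", binary)
print(decoded == value)                 # True
```

A fixed 4-byte encoding is also cheaper to compare and to skip over than variable-length text, which matters when millions of records cross the network between map and reduce.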




Question : Can a custom data type for MapReduce processing be implemented?
1. No, Hadoop does not provide techniques for custom datatypes
2. Yes, but only for mappers
3. (option text not shown)
4. Yes, but only for reducers


Question : What happens if the mapper output type does not match the reducer input type?
1. The Hadoop API will convert the data to the type that is needed by the reducer.
2. Data input/output inconsistency cannot occur. A preliminary validation check is executed prior
to the full execution of the job to ensure there is consistency.
3. (option text not shown)
4. A runtime exception will be thrown and the MapReduce job will fail




Question : Can you provide multiple input paths to a MapReduce job?
1. Yes, but only in Hadoop 0.22+
2. No, Hadoop always operates on one input directory
3. (option text not shown)
4. Yes, but the limit is currently capped at 10 input paths.
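Conceptually, when a job is given several input paths (via repeated `FileInputFormat.addInputPath` calls in the Java API), the framework gathers input files and splits from every path. A small Python sketch of that gathering step; the helper name `collect_input_files` is illustrative, not a Hadoop function:

```python
import os
import tempfile

def collect_input_files(paths):
    """Gather every file under each input path, the way a job with
    multiple input directories considers all of them as one input."""
    files = []
    for p in paths:
        for root, _, names in os.walk(p):
            files.extend(os.path.join(root, n) for n in sorted(names))
    return files

# Demo with two temporary "input directories", one file in each.
with tempfile.TemporaryDirectory() as d1, tempfile.TemporaryDirectory() as d2:
    for d, name in [(d1, "part-0"), (d2, "part-1")]:
        open(os.path.join(d, name), "w").close()
    print(len(collect_input_files([d1, d2])))   # 2
```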




Question : Can you assign different mappers to different input paths?

1. Yes, but only if data is identical.
2. Yes, different mappers can be assigned to different directories
3. (option text not shown)
4. Yes, but only in Hadoop 0.22+
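In the Java API this is done with `MultipleInputs.addInputPath`, which binds each input path to its own mapper class. The routing idea can be sketched in Python; the paths and mapper functions below are purely illustrative:

```python
# Sketch of MultipleInputs-style routing: each input directory is
# bound to its own mapper function (illustrative, not Hadoop code).
def csv_mapper(line):
    key, value = line.split(",", 1)
    return (key, value)

def tsv_mapper(line):
    key, value = line.split("\t", 1)
    return (key, value)

# Analogous to one MultipleInputs.addInputPath call per directory.
mapper_for_path = {
    "/data/csv": csv_mapper,
    "/data/tsv": tsv_mapper,
}

# A record is processed by the mapper bound to the path it came from.
path, line = "/data/tsv", "apple\t3"
print(mapper_for_path[path](line))   # ('apple', '3')
```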





Question : Can you suppress reducer output?

1. Yes, there is a special data type that will suppress job output
2. No, a MapReduce job will always generate output.
3. (option text not shown)
4. Yes, but only during map execution when reducers have been set to zero