Cloudera Hadoop Developer Certification Questions and Answers (Dumps and Practice Questions)



Question : What is Writable?

1. Writable is a Java interface that needs to be implemented for streaming data to remote servers.
2. Writable is a Java interface that needs to be implemented for HDFS writes.
3. Writable is a Java interface that needs to be implemented for MapReduce processing.
4. None of these answers are correct




Correct Answer : 3

Explanation: Hadoop performs a lot of data transmission between different datanodes. Writable is needed for MapReduce processing in order to keep those transmissions fast. The Writable interface makes serialization quick and easy for Hadoop.
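
As a minimal sketch (the class name WritableDemo is just for illustration), the following shows the write() half of the Writable contract using the built-in IntWritable:

    import java.io.ByteArrayOutputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    import org.apache.hadoop.io.IntWritable;

    // Minimal sketch: serialize a built-in Writable by hand to show the
    // write(DataOutput) half of the Writable contract. The class name is
    // illustrative only.
    public class WritableDemo {
        public static void main(String[] args) throws IOException {
            IntWritable value = new IntWritable(42);

            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bytes);
            value.write(out);  // Writable.write(DataOutput)
            out.close();

            // An IntWritable serializes to exactly 4 bytes, far more compact
            // than default Java object serialization of an Integer.
            System.out.println("Serialized size: " + bytes.size() + " bytes");
        }
    }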


Question :

The Hadoop API provides its own basic types such as LongWritable, Text, and IntWritable. They have almost the same features as the corresponding default Java classes. What are these Writable data types optimized for?

1. Writable data types are specifically optimized for network transmissions
2. Writable data types are specifically optimized for file system storage
3. Writable data types are specifically optimized for data processing
4. Writable data types are specifically optimized for data retrieval



Correct Answer : 1

Explanation: Data needs to be represented in a format optimized for network transmission, because Hadoop depends on moving data between datanodes very quickly. Writable data types are used for this purpose.
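
As a hedged illustration (the class name RoundTripDemo is hypothetical), this round-trips a Text value through a byte stream, the same serialize/deserialize mechanism Hadoop uses when shipping keys and values across the network:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    import org.apache.hadoop.io.Text;

    // Minimal sketch: round-trip a Text value through a byte stream, the
    // same serialize/deserialize path used for network transmission.
    public class RoundTripDemo {
        public static void main(String[] args) throws IOException {
            Text original = new Text("hadoop");

            // Sender side: serialize into a compact byte buffer.
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            original.write(new DataOutputStream(buffer));

            // Receiver side: deserialize into a reusable object.
            Text copy = new Text();
            copy.readFields(new DataInputStream(
                    new ByteArrayInputStream(buffer.toByteArray())));

            System.out.println(copy);  // prints "hadoop"
        }
    }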


Question :

Can a custom data type be implemented for MapReduce processing?

1. No, Hadoop does not provide techniques for custom datatypes
2. Yes, but only for mappers
3. Yes, custom data types can be implemented for both mappers and reducers
4. Yes, but only for reducers



Correct Answer : 3

Explanation: Developers can easily implement new data types for any objects. It is common practice to take existing classes and extend them with the Writable interface.
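
A minimal sketch of what such a custom type looks like (PointWritable is a hypothetical example, not a Hadoop class): implementing write() and readFields() is all MapReduce needs to use it as a value type, while keys would implement WritableComparable instead:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;

    import org.apache.hadoop.io.Writable;

    // Minimal sketch of a custom MapReduce value type. PointWritable is a
    // hypothetical example class, not part of Hadoop.
    public class PointWritable implements Writable {
        private int x;
        private int y;

        public PointWritable() { }  // no-arg constructor required by Hadoop

        public PointWritable(int x, int y) {
            this.x = x;
            this.y = y;
        }

        @Override
        public void write(DataOutput out) throws IOException {
            out.writeInt(x);
            out.writeInt(y);
        }

        @Override
        public void readFields(DataInput in) throws IOException {
            x = in.readInt();
            y = in.readInt();
        }
    }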


Related Questions


Question : A TaskTracker cannot start multiple tasks on the same node.

1. True
2. False


Question : A TaskTracker runs all MapTasks in the same JVM if the machine has enough processing power and memory.

1. True
2. False


Question : Select the correct statement

1. While a job is running, the intermediate data is continuously deleted
2. Reducers write their final output to HDFS
3. Intermediate data is never deleted; HDFS stores it for history tracking
4. All of 1, 2 and 3 are correct
5. None of the above



Question : The intermediate data is held on the TaskTracker's local disk.
1. True
2. False



Question : Which Hadoop project provides a SQL-like interface to access data stored in HDFS?
1. Flume
2. Hive
3. Pig
4. 2 and 3


Question : Which of the following projects provides a dataflow language for transforming large datasets?

1. Hive
2. Pig
3. Flume
4. Both 2 and 3