Question : The Hadoop API uses basic Java types such as LongWritable, Text, and IntWritable. They have almost the same features as the default Java classes. What are these Writable data types optimized for?
1. Writable data types are specifically optimized for network transmission
2. Writable data types are specifically optimized for file system storage
3. Access Mostly Uused Products by 50000+ Subscribers
4. Writable data types are specifically optimized for data retrieval
Correct Answer : 1
Explanation: Data needs to be represented in a format optimized for network transmission. Hadoop depends on being able to move data between DataNodes very quickly, and the Writable data types are designed for exactly this purpose.
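To make the point concrete, here is a minimal sketch (plain JDK, no Hadoop dependency; class and method names are illustrative) contrasting Writable-style serialization, which writes only the raw field bytes through a DataOutput, with standard java.io.Serializable, which also writes class metadata:

```java
import java.io.*;

// Sketch: why Writable-style serialization is compact on the wire.
public class WritableDemo {

    // Analogous to Hadoop's IntWritable: serializes to exactly 4 raw bytes.
    static byte[] writableBytes(int value) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);
            out.writeInt(value);   // just the 4-byte payload, nothing else
            out.flush();
            return buf.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // The same int through standard Java object serialization.
    static byte[] javaSerializedBytes(int value) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(buf);
            out.writeObject(Integer.valueOf(value)); // payload + class descriptor
            out.flush();
            return buf.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("Writable-style:  " + writableBytes(42).length + " bytes");
        System.out.println("Java serialized: " + javaSerializedBytes(42).length + " bytes");
    }
}
```

The Writable form is 4 bytes; the java.io.Serializable form is many times larger because every record carries class-descriptor overhead, which matters when millions of records cross the network during a shuffle.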
Question : Can a custom data type be implemented for Map-Reduce processing?
Correct Answer : Yes. Developers can easily implement new data types for any object. It is common practice to take existing classes and extend them so that they implement the Writable interface.
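A minimal sketch of such a custom type follows. The interface below mirrors the shape of Hadoop's org.apache.hadoop.io.Writable (two methods over java.io.DataOutput / DataInput), but it is redeclared here so the example compiles without Hadoop on the classpath; PointWritable and the helper methods are hypothetical names:

```java
import java.io.*;

// Mirrors the shape of Hadoop's Writable interface (redeclared for a
// self-contained example; the real one is org.apache.hadoop.io.Writable).
interface Writable {
    void write(DataOutput out) throws IOException;
    void readFields(DataInput in) throws IOException;
}

// Hypothetical custom type: a 2D point usable as a Map-Reduce value.
class PointWritable implements Writable {
    private double x, y;

    public PointWritable() {}  // no-arg constructor needed for deserialization
    public PointWritable(double x, double y) { this.x = x; this.y = y; }

    public void write(DataOutput out) throws IOException {
        out.writeDouble(x);
        out.writeDouble(y);
    }

    public void readFields(DataInput in) throws IOException {
        x = in.readDouble();
        y = in.readDouble();
    }

    public double getX() { return x; }
    public double getY() { return y; }
}

public class CustomWritableDemo {
    // Helpers to round-trip any Writable through a byte array.
    static byte[] toBytes(Writable w) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            w.write(new DataOutputStream(buf));
            return buf.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static void fromBytes(Writable w, byte[] bytes) {
        try {
            w.readFields(new DataInputStream(new ByteArrayInputStream(bytes)));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        PointWritable p = new PointWritable(1.5, -2.0);
        PointWritable q = new PointWritable();
        fromBytes(q, toBytes(p));   // serialize, then reconstruct
        System.out.println(q.getX() + "," + q.getY());
    }
}
```

Note that a type used as a Map-Reduce key (rather than only as a value) must additionally be comparable, which in Hadoop means implementing WritableComparable.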
Question : What happens if mapper output does not match reducer input?
1. Hadoop API will convert the data to the type that is needed by the reducer.
2. Data input/output inconsistency cannot occur. A preliminary validation check is executed prior to the full execution of the job to ensure there is consistency.
3. Access Mostly Uused Products by 50000+ Subscribers
4. A real-time exception will be thrown and the map-reduce job will fail
Correct Answer : 4
Explanation: The reducer's input types must match the mapper's output types, and Java is a strongly typed language. Therefore, if the types do not match, an exception is thrown at run time and the job fails.
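The run-time (rather than compile-time) failure can be illustrated with a small sketch, assuming the key point from the explanation: intermediate records pass through untyped plumbing (Java generics are erased), so the bad cast only surfaces when the reducer first touches a record. The class and method names here are illustrative, not Hadoop APIs:

```java
// Sketch: why a mapper/reducer type mismatch surfaces only at run time.
public class TypeMismatchDemo {

    // Stands in for a reducer declared to consume String-like keys; the
    // cast is where a mismatched record blows up at run time.
    public static String reduceExpectsString(Object record) {
        return (String) record;
    }

    public static void main(String[] args) {
        try {
            // Stands in for a mapper that emitted an Integer-like key.
            reduceExpectsString(Integer.valueOf(7));
            System.out.println("no error");
        } catch (ClassCastException e) {
            System.out.println("runtime ClassCastException: " + e.getMessage());
        }
    }
}
```

The code compiles cleanly; the failure appears only when the job actually runs, which is exactly why Hadoop cannot catch the mismatch with a preliminary static check.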