
Cloudera Hadoop Developer Certification Questions and Answers (Dumps and Practice Questions)



Question :

Select the correct code snippet that will produce 12 output files, one for each month, given that 12 reducers are defined for this job.

Sample input data
10.1.255.266,hadoopexam.com,index.html,20/Aug/2013
10.1.255.2,hadoopexam.com,index.html,11/Feb/2013
10.1.255.233,hadoopexam.com,index.html,14/Jan/2013

 :
1. 1
2. 2
3.

Correct Answer :

Explanation: MyPartitioner is the class that decides which reducer each record is sent to. Because there are 12 reducers, one per month, the partitioner routes
each record to the reducer for its month, and each reducer writes its own output file, producing 12 files in total.
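
For illustration, here is a minimal sketch of what such a month-based partitioner might look like. The class name MyPartitioner comes from the explanation above; the assumption that the key/value types are Text and that the date is the fourth comma-separated field of the value (as in the sample input) is ours, since the original snippets are not shown.

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Hypothetical month-based partitioner: routes each record to one of the
// 12 reducers according to the month found in the log line's date field.
public class MyPartitioner extends Partitioner<Text, Text> {

    // Month abbreviations as they appear in dates such as 20/Aug/2013
    private static final String[] MONTHS =
        {"Jan", "Feb", "Mar", "Apr", "May", "Jun",
         "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"};

    @Override
    public int getPartition(Text key, Text value, int numPartitions) {
        // Assume the value is the full log line, e.g.
        // 10.1.255.2,hadoopexam.com,index.html,11/Feb/2013
        String[] fields = value.toString().split(",");
        String month = fields[3].split("/")[1];
        for (int i = 0; i < MONTHS.length; i++) {
            if (MONTHS[i].equalsIgnoreCase(month)) {
                return i % numPartitions;   // 0..11 when 12 reducers are configured
            }
        }
        return 0;   // fall back to partition 0 for unparseable dates
    }
}

In the driver, the partitioner would be registered with job.setPartitionerClass(MyPartitioner.class) and job.setNumReduceTasks(12), so that each of the 12 reducers writes its own output file.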






Question :

From the code snippets given below, select the one that creates a compressed SequenceFile.

 :
1. 1
2. 2
3.

Correct Answer :

Explanation: The correct code snippet uses SequenceFileOutputFormat as the output format and the SnappyCodec for compression, and it is a map-only job.

There is no need to call setInputFormatClass, because the input file is a text file. However, the output file is a SequenceFile,
so we must call setOutputFormatClass. Snappy compression is used together with block-level compression.
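
As a reference, here is a sketch of a driver configured along those lines. The class name, the identity Mapper, and the paths taken from command-line arguments are placeholders, not the original snippet.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile.CompressionType;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.SnappyCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class CreateCompressedSequenceFile {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "create compressed sequence file");
        job.setJarByClass(CreateCompressedSequenceFile.class);

        // Input is plain text, so the default TextInputFormat is used
        // and setInputFormatClass is not required.
        FileInputFormat.setInputPaths(job, new Path(args[0]));

        // Output is a SequenceFile, so the output format must be set explicitly.
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // Snappy codec with block-level compression.
        FileOutputFormat.setCompressOutput(job, true);
        FileOutputFormat.setOutputCompressorClass(job, SnappyCodec.class);
        SequenceFileOutputFormat.setOutputCompressionType(job, CompressionType.BLOCK);

        // Map-only job: identity mapper (just to keep the sketch self-contained)
        // and zero reducers.
        job.setMapperClass(Mapper.class);
        job.setNumReduceTasks(0);

        // With TextInputFormat and the identity mapper, the output records
        // are LongWritable offsets and Text lines.
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}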





Question :

Select the correct code snippet that is able to read the compressed SequenceFile.

 :
1. 1
2. 2
3.


Correct Answer :

Explanation: We are using a SequenceFile as the input file. Therefore, we must call setInputFormatClass.
There is no need to call setOutputFormatClass, because the application writes a text file as output.
There is no need to set compression options for the input file: the compression details are recorded in the
input SequenceFile itself.
This is a map-only job, so we do not call setReducerClass, and we set the number of reducers to 0.
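
Again for reference, here is a sketch of a driver that reads the compressed SequenceFile back. The class name and the key/value types are assumptions carried over from the writing sketch above.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ReadCompressedSequenceFile {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "read compressed sequence file");
        job.setJarByClass(ReadCompressedSequenceFile.class);

        // Input is a SequenceFile, so the input format must be set explicitly.
        // No compression settings are needed here: the codec used to write the
        // file is recorded in the SequenceFile header and applied automatically.
        job.setInputFormatClass(SequenceFileInputFormat.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));

        // Output is plain text, so the default TextOutputFormat is used.
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // Map-only job: identity mapper, zero reducers, no setReducerClass call.
        job.setMapperClass(Mapper.class);
        job.setNumReduceTasks(0);

        // Key/value types here assume the file was written as LongWritable/Text,
        // as in the writing sketch above.
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}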



Related Questions


Question : If a file is 33 MB, how much block space will it use?
1. 33 MB
2. 64 MB
3.
4. None of the Above




Question : How are blocks stored in HDFS?
1. As a binary file
2. As a decoded file
3.
4. Stored as archived



Question : Without the metadata on the NameNode, can a file be recovered?
1. True
2. False


Question : Select the correct option ?
1. NameNode is the bottleneck for reading the file in HDFS
2. NameNode is used to determine the all the blocks of a file
3.
4. All of the above


Question :

Which is the correct option for accessing a file stored in HDFS?

 :
1. Application can read and write files in HDFS using JAVA API
2. There is a command line option to access the files
3.
4. 1,2 and 3 are correct
5. 1 and 2 are correct


Question : Which is the correct command to copy files from the local file system to HDFS?
1. hadoop fs -copy pappu.txt pappu.txt
2. hadoop fs -copyFromPath pappu.txt pappu.txt
3.
4. None of the above