Cloudera Hadoop Developer Certification Questions and Answers (Dumps and Practice Questions)
Question : Select the correct statement(s) about the NameNode.
1. The NameNode daemon must be running at all times.
2. The NameNode holds all of its metadata in RAM for fast access.
3. Access Mostly Uused Products by 50000+ Subscribers
4. 1,2 and 3 are correct
5. 1 and 2 are correct
Correct Answer :
Question : If the NameNode stops, the cluster becomes inaccessible?
1. True
2. False
Correct Answer : 1 (in a cluster without NameNode HA, the NameNode is a single point of failure, so HDFS becomes inaccessible when it stops)
Question : The Secondary NameNode is a backup for the NameNode?
1. True
2. False
Correct Answer : 2 (False)
The Secondary NameNode is not a backup of the NameNode; it takes care of housekeeping tasks for the NameNode, such as periodically merging the edits log into the fsimage.
Related Questions
Question :
Select the code snippet which correctly implements WritableComparable for a pair of Strings.
1. 1
2. 2
3. Access Mostly Uused Products by 50000+ Subscribers
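The numbered code snippets for this question are not reproduced in this dump. For reference, below is a minimal sketch of what a WritableComparable implementation for a pair of Strings typically looks like; the class name StringPair and the field layout are illustrative assumptions, not the hidden answer options.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;

// Illustrative sketch only: a pair of Strings usable as a MapReduce key.
public class StringPair implements WritableComparable<StringPair> {
    private String first = "";
    private String second = "";

    // A no-argument constructor is required so Hadoop can instantiate the type.
    public StringPair() {}

    public StringPair(String first, String second) {
        this.first = first;
        this.second = second;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(first);      // serialize both fields
        out.writeUTF(second);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        first = in.readUTF();     // deserialize in exactly the same order
        second = in.readUTF();
    }

    @Override
    public int compareTo(StringPair other) {
        int cmp = first.compareTo(other.first);   // order by first, then second
        return cmp != 0 ? cmp : second.compareTo(other.second);
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof StringPair)) return false;
        StringPair p = (StringPair) o;
        return first.equals(p.first) && second.equals(p.second);
    }

    @Override
    public int hashCode() {
        return first.hashCode() * 163 + second.hashCode();
    }

    @Override
    public String toString() {
        return first + "\t" + second;
    }
}

The points such a snippet is usually checked on are the no-argument constructor, write()/readFields() serializing the fields in the same order, and a compareTo() that is consistent with equals() and hashCode().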
Question :
If you have 100 files of 100 MB each and the block size is 64 MB, how many map tasks will run?
1. 100
2. 200
3. Access Mostly Uused Products by 50000+ Subscribers
4. Between 100 and 200
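Worked reasoning for the question above, assuming the default FileInputFormat behaviour (one split per block, files not combined): a 100 MB file stored with a 64 MB block size occupies ceil(100 / 64) = 2 blocks and therefore yields 2 input splits, so 100 such files produce about 100 × 2 = 200 map tasks, one per split.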
Question :
If you want to use a file from the distributed cache, in which method should you read it?
1. map
2. run
3. Access Mostly Uused Products by 50000+ Subscribers
4. setup
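As context for the question above, here is a hedged sketch of reading a distributed-cache file in a Mapper's setup() method (new MapReduce API). The class names and the assumed comma-separated key,value layout of the cached file are illustrative assumptions.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.net.URI;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Illustrative sketch: the cached file is read once per task in setup(),
// not once per record in map().
public class CacheAwareMapper extends Mapper<LongWritable, Text, Text, Text> {

    private final Map<String, String> lookup = new HashMap<>();

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        URI[] cacheFiles = context.getCacheFiles();  // files added via job.addCacheFile(...)
        if (cacheFiles != null && cacheFiles.length > 0) {
            // The framework localizes the file; read it by its base name
            // from the task's working directory (symlink created at localization).
            Path cached = new Path(cacheFiles[0].getPath());
            try (BufferedReader reader = new BufferedReader(new FileReader(cached.getName()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    String[] parts = line.split(",", 2);  // assumed key,value layout
                    if (parts.length == 2) {
                        lookup.put(parts[0], parts[1]);
                    }
                }
            }
        }
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // map() only consults the in-memory lookup built in setup().
        String id = value.toString().split(",", 2)[0];
        context.write(new Text(id), new Text(lookup.getOrDefault(id, "UNKNOWN")));
    }
}

On the driver side the file would be registered with something like job.addCacheFile(new URI("/user/hadoopexam/profiles.txt")) (the path is an assumption); reading it in map() instead would repeat the work for every input record.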
Question : From the Acmeshell.com website, all your data is stored in an Oracle database
table called MAIN.PROFILES. In HDFS you already have your Apache web server log file
stored, called users_activity.log. Now you want to combine/join the data from the
users_activity.log file and the MAIN.PROFILES table. Initially, you want to import the
table data from the database into Hive using Sqoop, with the delimiter (;) and the column
order kept the same. Now select the correct MapReduce code snippet which can produce the
CSV file, so that the output of the MapReduce job can be loaded into the Hive table
created in the above step, called PROFILE.
1. 1
2. 2
3. Access Mostly Uused Products by 50000+ Subscribers
4. 4
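The numbered snippets for this question are likewise not included in this dump. As a hedged sketch of the output side only: for the Hive table (created with fields terminated by ';', matching the Sqoop import's --fields-terminated-by ';') to load the MapReduce output, each record has to come out as one semicolon-delimited line. One illustrative approach, assuming the mappers have already tagged MAIN.PROFILES records with "P|" and users_activity.log records with "A|", is to build the whole line in the reducer and emit it as the key with a NullWritable value, so TextOutputFormat writes the line with no trailing separator.

import java.io.IOException;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Illustrative sketch only: emit each joined record as a single ';'-delimited
// line so the job's output files can be loaded into a Hive table declared with
// FIELDS TERMINATED BY ';'. Field names, order and the tag prefixes are assumptions.
public class ProfileActivityReducer extends Reducer<Text, Text, Text, NullWritable> {

    @Override
    protected void reduce(Text userId, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        String profile = null;
        StringBuilder activity = new StringBuilder();
        for (Text value : values) {
            String v = value.toString();
            if (v.startsWith("P|")) {            // record imported from MAIN.PROFILES
                profile = v.substring(2);
            } else if (v.startsWith("A|")) {     // record parsed from users_activity.log
                if (activity.length() > 0) activity.append(',');
                activity.append(v.substring(2));
            }
        }
        if (profile != null) {
            // One output line per user: id;profile fields;activity summary
            context.write(new Text(userId + ";" + profile + ";" + activity), NullWritable.get());
        }
    }
}

A Hive table declared with ROW FORMAT DELIMITED FIELDS TERMINATED BY ';' can then load these output files directly with LOAD DATA INPATH.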
Question : As part of the HadoopExam consultancy team, you have been given a requirement by a hotel to create
a GUI application through which, for all the hotel's sales and bookings, you will add and edit customer information. You don't want to spend
money on an enterprise RDBMS, so you decided to use a simple file as storage and considered a CSV file. Is HDFS a good choice for
storing such information in a file?
1. No, because HDFS is optimized for read-once, streaming access for relatively large files.
2. No, because HDFS is optimized for write-once, streaming access for relatively large files.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Yes, because HDFS is optimized for write-once, streaming access for relatively large files.
Question : Identify the statement which correctly describes the NameNode's use of RAM.
1. To store filenames and the first 100 lines of each file stored in HDFS.
2. To store filenames, and to act as a buffer while reading files.
3. Access Mostly Uused Products by 50000+ Subscribers
4. To store filenames and the list of blocks, but no metadata.