
Cloudera Hadoop Administrator Certification Questions and Answers (Dumps and Practice Questions)



Question : You are upgrading your Hadoop Cluster from one CDH release to a newer one, and while doing that you have to
back up the configuration data and stop the services.
Which of the following is the correct command for putting the active NameNode into safe mode?

1. sudo -u hdfs dfsadmin -safemode enter
2. sudo -u hdfs hdfs -safemode enter
3. sudo -u hdfs hdfs dfsadmin
4. sudo -u hdfs hdfs dfsadmin -safemode enter


Correct Answer : 4

Explanation: Put the NameNode into safe mode and save the fsimage:
Put the NameNode (or active NameNode in an HA configuration) into safe mode:
$ sudo -u hdfs hdfs dfsadmin -safemode enter
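As a quick sanity check (a minimal sketch; it assumes the hdfs client on that host is configured to reach the NameNode), you can confirm that safe mode is actually on before continuing:

$ sudo -u hdfs hdfs dfsadmin -safemode get
Safe mode is ON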







Question : You are upgrading your Hadoop Cluster from one CDH release to a newer one, and while doing that you have to
back up the configuration data and stop the services.
Which of the following is the correct command for performing a saveNamespace operation?


1. sudo -u hdfs -saveNamespace
2. sudo -u dfsadmin -saveNamespace
3. sudo -u hdfs hdfs dfsadmin -saveNamespace
4. sudo -u hdfs hdfs dfsadmin Namespace


Correct Answer : 3

Explanation: Back Up Configuration Data and Stop Services
1. Put the NameNode into safe mode and save the fsimage:
a. Put the NameNode (or active NameNode in an HA configuration) into safe mode:
$ sudo -u hdfs hdfs dfsadmin -safemode enter
b. Perform a saveNamespace operation:
$ sudo -u hdfs hdfs dfsadmin -saveNamespace
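For reference, a minimal sketch of the two steps run back to back on the (active) NameNode host, assuming the HDFS superuser is named hdfs as in the commands above:

# a. Stop accepting new writes to the namespace
$ sudo -u hdfs hdfs dfsadmin -safemode enter
# b. Flush the in-memory namespace to a new fsimage on disk
$ sudo -u hdfs hdfs dfsadmin -saveNamespace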





Question : You are upgrading your Hadoop Cluster from one CDH release to a newer one, and while doing that you have to
back up the configuration data and stop the services.
What happens when you run "sudo -u hdfs hdfs dfsadmin -saveNamespace" to perform the saveNamespace operation?


1. This will result in two new fsimages being written out with all new edit log entries.
2. This will result in a backup of the last fsimage, and all new operations will be appended to the existing fsimage, which is written out with no edit log entries.
3. This will result in a new fsimage being written out with no edit log entries.
4. This will result in a backup of the last fsimage.


Correct Answer : 3


Explanation: To Back Up Configuration Data and Stop Services
1. Put the NameNode into safe mode and save the fsimage:
a. Put the NameNode (or active NameNode in an HA configuration) into safe mode:
$ sudo -u hdfs hdfs dfsadmin -safemode enter
b. Perform a saveNamespace operation:
$ sudo -u hdfs hdfs dfsadmin -saveNamespace
This will result in a new fsimage being written out with no edit log entries.
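To see the effect of the saveNamespace call, one rough check (a sketch; /data/dfs/nn is only an example path, use your dfs.namenode.name.dir value from hdfs-site.xml) is to list the NameNode's metadata directory before and after:

$ ls -lt /data/dfs/nn/current | head
# A new fsimage_<txid> file should appear, and a fresh edits_inprogress_* segment
# starts from the transaction id recorded in that fsimage.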





Related Questions


Question : While upgrading CDH with YARN, you have to install or update and start ZooKeeper. Why do you have to do that?
1. For High Availability of NameNode
2. For High Availability of JobTracker
3. For High Availability of Resource Manager
4. 1 and 2 only
5. All 1,2 and 3
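For context, starting ZooKeeper with CDH packages typically looks like the sketch below; the service name and the init step are assumptions based on CDH packaging, so check your release's documentation:

# On each ZooKeeper host
$ sudo service zookeeper-server init --myid=1   # first-time initialization only; use a unique myid per host
$ sudo service zookeeper-server start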


Question : Which of the following steps need to be followed for upgrading the HDFS metadata for an HA deployment?

1. Run the following command on the active NameNode only, and make sure the JournalNodes have been upgraded to CDH 5 and are up and running before you run the command:
$ sudo service hadoop-hdfs-namenode -upgrade

2. If Kerberos is not enabled:
$ sudo -u hdfs hdfs namenode -bootstrapStandby
$ sudo service hadoop-hdfs-namenode start

3. Start up the DataNodes. On each DataNode:
$ sudo service hadoop-hdfs-datanode start

4. Wait for the NameNode to exit safe mode, and then start the Secondary NameNode.
a. To check that the NameNode has exited safe mode, look for messages in the log file, or on the NameNode's web interface, that say "...no longer in safe mode."
b. To start the Secondary NameNode (if used), enter the following command on the Secondary NameNode host:
$ sudo service hadoop-hdfs-secondarynamenode start
Select the correct combination of the above steps:
1. 1,2,3
2. 2,3,4
3. 1,3,4
4. 1,2,4
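For step 4 in the question above, besides watching the log file, the safe mode state can also be checked from the command line; a minimal sketch, assuming the hdfs client is configured to talk to that NameNode:

$ sudo -u hdfs hdfs dfsadmin -safemode get
# Or block until the NameNode leaves safe mode:
$ sudo -u hdfs hdfs dfsadmin -safemode wait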




Question : You have correctly configured the YARN cluster; now you have to start the YARN cluster properly with the following steps.

1. On the ResourceManager system:
$ sudo service hadoop-yarn-resourcemanager start

2. On each NodeManager system (typically the same ones where DataNode service runs):
$ sudo service hadoop-yarn-nodemanager start

3. On the MapReduce JobHistory Server system:
$ sudo service hadoop-mapreduce-historyserver start
Select the correct order of the above steps

1. 1,2,3
2. 2,3,1
3. 3,2,1
4. You can start the above services in any order
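As a compact sketch of the same sequence with a quick verification at the end (yarn node -list assumes the client configuration points at this ResourceManager):

# 1. ResourceManager, so that NodeManagers have something to register with
$ sudo service hadoop-yarn-resourcemanager start
# 2. NodeManagers, on every worker node
$ sudo service hadoop-yarn-nodemanager start
# 3. MapReduce JobHistory Server
$ sudo service hadoop-mapreduce-historyserver start
# Verify that the NodeManagers have registered
$ yarn node -list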




Question : Which of the following are new features of the MapReduce v2 (YARN) architecture?
1. ResourceManager High Availability: YARN now allows you to use multiple ResourceManagers so that there is no single point of failure. In-flight jobs are recovered without re-running completed tasks.
2. Monitoring and enforcing memory and CPU-based resource utilization using cgroups.
3. Continuous Scheduling: This feature decouples scheduling from the node heartbeats for improved performance in large clusters
4. 1 and 2
5. 1,2 and 3
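When ResourceManager High Availability is enabled, the active/standby state of each ResourceManager can be checked from the command line; a small sketch in which rm1 and rm2 are placeholder ids taken from yarn.resourcemanager.ha.rm-ids:

$ yarn rmadmin -getServiceState rm1
active
$ yarn rmadmin -getServiceState rm2
standby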




Question : When you submit your MapReduce job on the YARN framework, which of the following components is responsible
for monitoring resource usage (e.g. CPU, memory, disk, network) on individual nodes?
1. Resource Manager
2. Application Master
3. Node Manager
4. NameNode
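To see what an individual NodeManager reports about its node, a quick sketch (the node id below is a placeholder; pick a real Node-Id from the yarn node -list output):

$ yarn node -list
$ yarn node -status worker01.example.com:8041
# The status report includes memory and vcores used and available on that node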



Question : Match the following in the case of YARN:

1. YARN Resource Manager
2. YARN Node Managers
3. MapReduce Application Master

a. launch and monitor the tasks of jobs
b. allocates the cluster resources to jobs
c. coordinates the tasks running in the MapReduce job

1. 1-a, 2-b, 3-c
2. 1-b, 2-a, 3-c
3. 1-c, 2-a, 3-b
4. 1-a, 2-c, 3-b
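A running job makes this split of responsibilities visible from the command line; a small sketch (the application id is a placeholder):

# Applications, and therefore ApplicationMasters, known to the ResourceManager
$ yarn application -list
# Detailed status of one application, including the host its ApplicationMaster runs on
$ yarn application -status application_1400000000000_0001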