
Cloudera Hadoop Administrator Certification: Questions and Answers (Dumps and Practice Questions)



Question : Which of the following responsibilities of the JobTracker were split in the YARN architecture?
1. Resource Management
2. Job Monitoring
3. Data Distribution
4. 1 and 2
5. All 1, 2 and 3




Correct Answer : 4


Explanation: The YARN architecture splits the two primary responsibilities of the JobTracker:
1. Resource management
2. Job scheduling/monitoring
into separate daemons: a global ResourceManager (RM) and per-application ApplicationMasters (AM). The ResourceManager service effectively replaces the functions of the JobTracker, and NodeManagers run on slave nodes instead of TaskTracker daemons. The per-application ApplicationMaster is, in effect, a framework-specific library tasked with negotiating resources from the ResourceManager and working with the NodeManager(s) to execute and monitor the tasks.
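The split described above shows up directly in configuration: the ResourceManager is a single cluster-wide address, while every slave node runs a NodeManager. A minimal yarn-site.xml sketch is shown below; the hostname is a placeholder, not something from the original question.

```xml
<!-- yarn-site.xml (sketch; rm-host.example.com is a placeholder) -->
<configuration>
  <!-- Global ResourceManager: takes over the JobTracker's resource-management role -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>rm-host.example.com</value>
  </property>
  <!-- NodeManagers run on each slave node in place of TaskTracker daemons -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```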








Question : The Cloudera distribution, CDH, is designed to run both MRv1 and YARN on the same nodes at the same time, unlike the default Apache Hadoop distribution.
1. True
2. False
3. True, you have to configure CDH5 while setup
4. False, MRv1 is not supported from Hadoop 2.0


Correct Answer : 2


Explanation:

Cloudera does not support running MRv1 and YARN daemons on the same nodes at the same time.




Question : Which of the following is not supported in CDH 5 for HA?
1. Quorum-based storage is the only supported HDFS HA
2. HA with NFS shared storage
3. Both 1 and 2
4. Both 1 and 2 are supported


Correct Answer : 2


Explanation: If you have configured HDFS HA with NFS shared storage, do not proceed with upgrading to CDH 5. That configuration is not supported on CDH 5;
quorum-based storage is the only supported HDFS HA configuration on CDH 5.

In CDH 5 you can configure high availability for both the NameNode and the JobTracker or ResourceManager.
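The quorum-based setup mentioned above stores the shared edit log on a set of JournalNodes rather than on NFS. A minimal hdfs-site.xml fragment is sketched below; the nameservice ID and JournalNode hostnames are placeholders.

```xml
<!-- hdfs-site.xml (sketch; "mycluster" and jn*.example.com are placeholders) -->
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <!-- Edits go to a quorum of JournalNodes, not to NFS shared storage -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485/mycluster</value>
  </property>
</configuration>
```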



Related Questions


Question : You are upgrading your Hadoop cluster from CDH 4 to CDH 5, and as part of this you must back up the configuration data and stop the services. Which of the following is the correct command for putting the active NameNode into safe mode?
1. sudo -u hdfs dfsadmin -safemode enter
2. sudo -u hdfs hdfs -safemode enter
3. sudo -u hdfs hdfs dfsadmin
4. sudo -u hdfs hdfs dfsadmin -safemode enter
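The safe-mode subcommands in option 4 can be exercised with the sketch below. The `hdfs` call is replaced by a stub function here so the snippet runs without a cluster; on a live cluster you would drop the stub and prefix each command with `sudo -u hdfs`.

```shell
#!/bin/sh
# Stub standing in for the real `hdfs` binary, so this sketch runs anywhere.
# On a real cluster: sudo -u hdfs hdfs dfsadmin -safemode enter
hdfs() { echo "hdfs $*"; }

hdfs dfsadmin -safemode enter   # block namespace changes before the upgrade
hdfs dfsadmin -safemode get     # report the current safe-mode state
hdfs dfsadmin -safemode leave   # exit safe mode once the upgrade is done
```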



Question : You are upgrading your Hadoop cluster from CDH 4 to CDH 5, and as part of this you must back up the configuration data and stop the services. Which of the following is the correct command for a saveNamespace operation?
1. sudo -u hdfs -saveNamespace
2. sudo -u dfsadmin -saveNamespace
3. sudo -u hdfs hdfs dfsadmin -saveNamespace
4. sudo -u hdfs hdfs dfsadmin Namespace



Question : You are upgrading your Hadoop cluster from CDH 4 to CDH 5, and as part of this you must back up the configuration data and stop the services. What happens when you run "sudo -u hdfs hdfs dfsadmin -saveNamespace" to perform a saveNamespace operation?
1. This will result in two new fsimages being written out with all new edit log entries.
2. This will result in a backup of the last fsimage, and all new operations will be appended to the existing fsimage with no edit log entries.
3. This will result in a new fsimage being written out with no edit log entries.
4. This will result in a backup of the last fsimage.
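Putting the two upgrade steps together, the documented checkpoint sequence looks like the sketch below. Again `hdfs` is stubbed out so the snippet runs without a cluster; on a real cluster each call would be `sudo -u hdfs hdfs dfsadmin ...`.

```shell
#!/bin/sh
# Stub for the real `hdfs` binary; on a cluster, use `sudo -u hdfs hdfs ...` instead.
hdfs() { echo "hdfs $*"; }

# 1. Enter safe mode: the NameNode stops accepting changes to the namespace.
hdfs dfsadmin -safemode enter

# 2. saveNamespace: checkpoints a new fsimage with no edit log entries.
hdfs dfsadmin -saveNamespace
```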



Question : Which of the following is the correct command to stop all Hadoop services across your entire cluster?
1. for x in `cd /etc/init.d ; ls hadoop-*` ; do sudo service $x stop ; done
2. for x in `cd /etc/init.d ; ls mapred-*` ; do sudo service $x stop ; done
3. for x in `cd /etc/init.d ; ls NameNode-*` ; do sudo service $x stop ; done
4. for x in `cd /etc/init.d ; ls hdfs-*` ; do sudo service $x stop ; done
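The loop in option 1 iterates over every init script whose name starts with hadoop- and stops it. The sketch below demonstrates the same glob-in-backquotes pattern against a temporary directory instead of /etc/init.d, so it runs without root or Hadoop installed; the directory and the echo are stand-ins for the real `sudo service $x stop`.

```shell
#!/bin/sh
# Stand-in for /etc/init.d, populated with fake Hadoop init scripts.
dir=$(mktemp -d)
touch "$dir/hadoop-hdfs-namenode" "$dir/hadoop-hdfs-datanode" "$dir/mysql"

# Same shape as the real command: the backquoted subshell lists only the
# hadoop-* scripts; the real loop body would be `sudo service $x stop`.
for x in `cd "$dir" ; ls hadoop-*` ; do echo "stopping $x" ; done

rm -rf "$dir"
```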



Question : Select the correct statements that apply when backing up HDFS metadata on the NameNode machine.
1. Do this step when you are sure that all Hadoop services have been shut down.
2. If the NameNode configuration sets "dfs.name.dir" to multiple comma-separated path values, then we should back up all of the directories.
3. If you see a file containing the word lock in the configured directory for the NameNode, the NameNode is probably still running.
4. 1 and 3
5. 2 and 3
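The checks in statements 2 and 3 can be sketched as a small script. This is illustrative only: the metadata path is a temporary stand-in for whatever "dfs.name.dir" points to, the real procedure would repeat the backup for every comma-separated directory, and the lock-file name `in_use.lock` is the one HDFS writes in an active storage directory.

```shell
#!/bin/sh
# Stand-in for one of the directories listed in dfs.name.dir.
name_dir=$(mktemp -d)
mkdir -p "$name_dir/current"
touch "$name_dir/current/fsimage"

# Statement 3: a lock file in the storage directory means the NameNode
# is probably still running, so the backup would not be consistent.
if [ -e "$name_dir/in_use.lock" ]; then
    echo "NameNode appears to be running; aborting backup"
else
    # Statement 2: back up the directory (repeat for each dfs.name.dir path).
    tar -cf "$name_dir.tar" -C "$name_dir" .
    echo "backup written"
fi

rm -rf "$name_dir" "$name_dir.tar"
```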



Question : Please select the correct command to uninstall Hadoop
1. On Red Hat-compatible systems: $ sudo yum remove bigtop-utils bigtop-jsvc bigtop-tomcat sqoop2-client hue-common solr
2. On SLES systems: $ sudo remove bigtop-utils bigtop-jsvc bigtop-tomcat sqoop2-client hue-common solr
3. On Ubuntu systems: sudo remove bigtop-utils bigtop-jsvc bigtop-tomcat sqoop2-client hue-common solr
4. All of the above