
Cloudera Hadoop Administrator Certification Questions and Answers (Dumps and Practice Questions)



Question : Which of the following is the correct command to stop all Hadoop services across your entire cluster?
1. for x in `cd /etc/init.d ; ls hadoop-*` ; do sudo service $x stop ; done
2. for x in `cd /etc/init.d ; ls mapred-*` ; do sudo service $x stop ; done
3. for x in `cd /etc/init.d ; ls NameNode-*` ; do sudo service $x stop ; done
4. for x in `cd /etc/init.d ; ls hdfs-*` ; do sudo service $x stop ; done


Correct Answer : 1


Explanation:
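In CDH, the Hadoop init scripts in /etc/init.d are all named with the hadoop- prefix (for example hadoop-hdfs-namenode or hadoop-hdfs-datanode), so option 1 is the only glob that matches them; the mapred-*, NameNode-*, and hdfs-* patterns match no init scripts. A commented sketch of the command, to be run on each node:

# Stop every service whose init script name starts with hadoop-.
for x in `cd /etc/init.d ; ls hadoop-*` ; do
  sudo service $x stop    # e.g. hadoop-hdfs-namenode, hadoop-hdfs-datanode
done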






Question : Select the correct statements that apply when backing up HDFS metadata on the NameNode machine

1. Do this step when you are sure that all Hadoop services have been shut down.
2. If dfs.name.dir in the NameNode's XML configuration is set to multiple comma-separated paths, you must back up all of the directories.
3. If you see a file containing the word lock in the configured NameNode directory, the NameNode is probably still running.
4. 1 and 3
5. 2 and 3


Correct Answer : 4


Explanation: Back up the HDFS Metadata
To back up the HDFS metadata on the NameNode machine:

Important:
Do this step when you are sure that all Hadoop services have been shut down. It is particularly important that the NameNode service is not running so that you can make a consistent backup.
o Cloudera recommends backing up HDFS metadata on a regular basis, as well as before a major upgrade.
o dfs.name.dir is deprecated but still works; dfs.namenode.name.dir is preferred. This example uses dfs.name.dir.
o Find the location of your dfs.name.dir (or dfs.namenode.name.dir); for example:
$ grep -C1 dfs.name.dir /etc/hadoop/conf/hdfs-site.xml

You should see something like this:
<property>
<name>dfs.name.dir</name>
<value>/mnt/hadoop/hdfs/name</value>
</property>
o Back up the directory. The path inside the <value> XML element is the path to your HDFS metadata. If you see a comma-separated list of paths, there is no need to back up all of them; they store the same data. Back up the first directory, for example, by using the following commands:
$ cd /mnt/hadoop/hdfs/name
# tar -cvf /root/nn_backup_data.tar .
./
./current/
./current/fsimage
./current/fstime
./current/VERSION
./current/edits
./image/
./image/fsimage
Warning:
If you see a file containing the word lock, the NameNode is probably still running. Repeat the preceding steps, starting by shutting down the Hadoop services.
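The steps above can be combined into a small script. A minimal sketch, assuming the single metadata path from the example above and checking for the lock file before archiving:

# First (or only) path configured in dfs.name.dir.
NAME_DIR=/mnt/hadoop/hdfs/name
# A lock file means the NameNode is probably still running: stop and shut services down.
if ls "$NAME_DIR" | grep -q lock ; then
  echo "Lock file present in $NAME_DIR; shut down the Hadoop services first." >&2
  exit 1
fi
# Archive the metadata directory.
cd "$NAME_DIR" && tar -cvf /root/nn_backup_data.tar .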






Question : Select the correct command to uninstall Hadoop
1. On Red Hat-compatible systems: $ sudo yum remove bigtop-utils bigtop-jsvc bigtop-tomcat sqoop2-client hue-common solr
2. On SLES systems: $ sudo remove bigtop-utils bigtop-jsvc bigtop-tomcat sqoop2-client hue-common solr
3. On Ubuntu systems: sudo remove bigtop-utils bigtop-jsvc bigtop-tomcat sqoop2-client hue-common solr
4. All of above

Correct Answer : 1

Explanation:

To uninstall Hadoop:
Run this command on each host:
On Red Hat-compatible systems:
$ sudo yum remove bigtop-utils bigtop-jsvc bigtop-tomcat sqoop2-client hue-common solr
On SLES systems:
$ sudo zypper remove bigtop-utils bigtop-jsvc bigtop-tomcat sqoop2-client hue-common solr
On Ubuntu systems:
$ sudo apt-get remove bigtop-utils bigtop-jsvc bigtop-tomcat sqoop2-client hue-common solr
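Since the command must run on every host, it can be driven from one machine over ssh. A minimal sketch for Red Hat-compatible systems, assuming passwordless ssh and a hypothetical hosts.txt listing one hostname per line:

# Uninstall the Hadoop packages on every host in hosts.txt.
for host in $(cat hosts.txt) ; do
  ssh "$host" 'sudo yum remove -y bigtop-utils bigtop-jsvc bigtop-tomcat sqoop2-client hue-common solr'
done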





Related Questions


Question : A developer has submitted a YARN job by calling the submitApplication() method on the ResourceManager.
Please select the correct order of the steps below after that.

1. The container is managed by the NodeManager after job submission.
2. The ResourceManager triggers its Scheduler sub-component, which allocates containers for the MapReduce job's execution.
3. The ResourceManager starts the ApplicationMaster in the container.

1. 2,3,1
2. 1,2,3
3. 2,1,3
4. 1,3,2
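For context, submitApplication() is invoked under the hood when a job is launched from the shell; a minimal sketch, assuming the MapReduce examples JAR at a typical CDH path (adjust for your install):

# Submitting the pi example calls submitApplication() on the ResourceManager.
yarn jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 2 10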


Question : Which of the following are responsibilities of the ApplicationMaster?

1. Before starting any task, it creates the job's output directory through the job's OutputCommitter.
2. Both map tasks and reduce tasks are created by the ApplicationMaster.
3. If the submitted job is small, the ApplicationMaster runs the job in the same JVM in which the ApplicationMaster itself is running.
4. If the job doesn't qualify as an uber task, the ApplicationMaster requests containers for all map tasks and reduce tasks.

1. 1,2,3
2. 2,3,4
3. 1,3,4
4. 1,2,4
5. 1,2,3,4
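Statement 3 describes an uber job. Whether a small job runs that way is controlled from mapred-site.xml; a minimal sketch (uber mode is off by default, and the maxmaps/maxreduces thresholds shown are the usual defaults, so verify against your version):

<property>
<name>mapreduce.job.ubertask.enable</name>
<value>true</value>
</property>
<property>
<name>mapreduce.job.ubertask.maxmaps</name>
<value>9</value>
</property>
<property>
<name>mapreduce.job.ubertask.maxreduces</name>
<value>1</value>
</property>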





Question : Which of the following steps are followed as part of task execution?

1. Once containers are assigned to tasks, the ApplicationMaster starts the containers by notifying their NodeManagers.
2. The ApplicationMaster copies job resources (such as the job JAR file) from the HDFS distributed cache and runs the map or reduce tasks.
3. The NodeManager copies job resources (such as the job JAR file) from the HDFS distributed cache and runs the map or reduce tasks.
4. Running tasks keep reporting their progress and status (including counters) to the ApplicationMaster, which collects this information from all tasks and propagates the aggregate values to the client node or user.

1. 1,2,3
2. 2,3,4
3. 3,4,1
4. 1,3,4
5. 1,2,3,4
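The progress and status that statement 4 describes is what the standard CLI surfaces on a client node; for example:

# Show the state and progress the ApplicationMaster has aggregated for an application.
yarn application -status <application_id>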





Question : You have to use the YARN framework for your Hadoop cluster, and for that you must configure "mapreduce.framework.name"
in mapred-site.xml. Which is the correct value for this property?

1. mrv2
2. yarn
3. v2
4. No need to configure; CDH 5 runs on YARN by default
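With answer 2, the property block in mapred-site.xml looks like this:

<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>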




Question : Which of the following properties in the yarn-site.xml file specifies the URIs of the directories
where the NodeManager stores container log files?

1. yarn.nodemanager.local-dirs
2. yarn.nodemanager.log-dirs
3. yarn.nodemanager.remote-app-log-dir
4. yarn.nodemanager.dirs
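For reference, the property is set in yarn-site.xml; a minimal sketch, with the path shown being a typical but illustrative choice:

<property>
<name>yarn.nodemanager.log-dirs</name>
<value>/var/log/hadoop-yarn/containers</value>
</property>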




Question : For YARN: conf/core-site.xml and conf/yarn-site.xml, respectively, contain the IP addresses, not the hostnames, of the NameNode, the ResourceManager, and the ResourceManager Scheduler.

1. True
2. False
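For reference, these addresses are conventionally configured with hostnames; a minimal sketch, where namenode01 and rm01 are hypothetical hostnames and the ports shown are the common defaults:

<property>
<name>fs.defaultFS</name>
<value>hdfs://namenode01:8020</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>rm01:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>rm01:8030</value>
</property>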