Question : Which of the following is the correct command to stop all Hadoop services across your entire cluster?
1. for x in `cd /etc/init.d ; ls hadoop-*` ; do sudo service $x stop ; done
2. for x in `cd /etc/init.d ; ls mapred-*` ; do sudo service $x stop ; done
3. for x in `cd /etc/init.d ; ls NameNode-*` ; do sudo service $x stop ; done
4. for x in `cd /etc/init.d ; ls hdfs-*` ; do sudo service $x stop ; done
Correct Answer : 1
Explanation:
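The correct loop globs every init script whose name starts with hadoop- and stops it via the service command. A minimal sketch of the same glob-loop pattern, using a temporary directory with dummy script names in place of /etc/init.d and echoing instead of actually calling sudo service (the file names below are illustrative only):

```shell
# Simulate /etc/init.d with dummy hadoop-* init scripts (hypothetical names)
demo=$(mktemp -d)
touch "$demo/hadoop-hdfs-namenode" "$demo/hadoop-hdfs-datanode" "$demo/mapred-historyserver"

# Same pattern as option 1, but collecting names instead of stopping services
stopped=""
for x in `cd "$demo" ; ls hadoop-*` ; do stopped="$stopped $x" ; done
echo "would stop:$stopped"
```

Note that the mapred-* script is not matched: that is why options 2-4 are wrong, since each of their globs catches only a subset of the Hadoop init scripts.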
Question : Select the correct statement which applies while taking the back up HDFS metadata on the NameNode machine
1. Do this step when you are sure that all Hadoop services have been shut down.
2. If the NameNode is configured with "dfs.name.dir" set to multiple comma-separated path values, then we should back up all of the directories.
3. If you see a file containing the word lock in the configured NameNode directory, the NameNode is probably still running.
4. 1 and 3
5. 2 and 3
Correct Answer : 4
Explanation: Back up the HDFS metadata on the NameNode machine as follows.
Important: Do this step when you are sure that all Hadoop services have been shut down. It is particularly important that the NameNode service is not running, so that you can make a consistent backup.
o Cloudera recommends backing up HDFS metadata on a regular basis, as well as before a major upgrade.
o dfs.name.dir is deprecated but still works; dfs.namenode.name.dir is preferred. This example uses dfs.name.dir.
o Find the location of your dfs.name.dir (or dfs.namenode.name.dir); for example:
$ grep -C1 dfs.name.dir /etc/hadoop/conf/hdfs-site.xml
You should see something like this:
<property>
<name>dfs.name.dir</name>
<value>/mnt/hadoop/hdfs/name</value>
</property>
o Back up the directory. The path inside the <value> XML element is the path to your HDFS metadata. If you see a comma-separated list of paths, there is no need to back up all of them; they store the same data. Back up the first directory, for example, by using the following commands:
$ cd /mnt/hadoop/hdfs/name
# tar -cvf /root/nn_backup_data.tar .
./
./current/
./current/fsimage
./current/fstime
./current/VERSION
./current/edits
./image/
./image/fsimage
Warning: If you see a file containing the word lock, the NameNode is probably still running. Repeat the preceding steps, starting by shutting down the Hadoop services.
Question : Please select the correct command to uninstall Hadoop
1. On Red Hat-compatible systems: $ sudo yum remove bigtop-utils bigtop-jsvc bigtop-tomcat sqoop2-client hue-common solr
2. On SLES systems: $ sudo remove bigtop-utils bigtop-jsvc bigtop-tomcat sqoop2-client hue-common solr
3. On Ubuntu systems: sudo remove bigtop-utils bigtop-jsvc bigtop-tomcat sqoop2-client hue-common solr
4. All of the above
Correct Answer : 1
Explanation:
To uninstall Hadoop, run this command on each host:
On Red Hat-compatible systems:
$ sudo yum remove bigtop-utils bigtop-jsvc bigtop-tomcat sqoop2-client hue-common solr
On SLES systems:
$ sudo zypper remove bigtop-utils bigtop-jsvc bigtop-tomcat sqoop2-client hue-common solr
On Ubuntu systems:
$ sudo apt-get remove bigtop-utils bigtop-jsvc bigtop-tomcat sqoop2-client hue-common solr
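Since only the package manager differs between the three distro families, one way to script this is to detect which package manager is available and build the removal command from it. This is a sketch of that idea under my own assumptions (the detection-by-command approach is not from the source, and the script only echoes the command rather than running it):

```shell
# Package list from the uninstall instructions above
pkgs="bigtop-utils bigtop-jsvc bigtop-tomcat sqoop2-client hue-common solr"

# Hypothetical detection: pick the removal command by which manager exists
if command -v yum >/dev/null 2>&1 ; then
  remove_cmd="yum remove"          # Red Hat-compatible
elif command -v zypper >/dev/null 2>&1 ; then
  remove_cmd="zypper remove"       # SLES
else
  remove_cmd="apt-get remove"      # Ubuntu/Debian
fi

# Echo instead of executing, so this sketch is safe to run anywhere
echo "would run: sudo $remove_cmd $pkgs"
```

Options 2 and 3 in the question are wrong precisely because they drop the package manager (zypper, apt-get) and leave a bare "sudo remove", which is not a valid command on those systems.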