Question : You want to use the YARN framework for your Hadoop cluster. To do so, you must configure "mapreduce.framework.name" in mapred-site.xml. Which is the correct value for this property?
1. mrv2 2. yarn 3. v2 4. No need to configure; CDH 5 uses YARN by default
Correct Answer : 2
Explanation: mapreduce.framework.name (in mapred-site.xml): if you plan on running YARN, you must set this property to the value yarn. Sample configuration in mapred-site.xml:
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
Question : Which of the following properties in the yarn-site.xml file specifies the URIs of the directories where the NodeManager stores container log files?
Explanation:
yarn.nodemanager.local-dirs: Specifies the URIs of the directories where the NodeManager stores its localized files. All of the files required for running a particular YARN application are put here for the duration of the application run. Cloudera recommends that this property specify a directory on each of the JBOD mount points; for example, file:///data/1/yarn/local through file:///data/N/yarn/local.
yarn.nodemanager.log-dirs: Specifies the URIs of the directories where the NodeManager stores container log files. Cloudera recommends that this property specify a directory on each of the JBOD mount points; for example, file:///data/1/yarn/logs through file:///data/N/yarn/logs.
yarn.nodemanager.remote-app-log-dir: Specifies the URI of the directory where logs are aggregated. Set the value to hdfs://(namenode-host.company.com):8020/var/log/hadoop-yarn/apps, using the fully qualified domain name of your NameNode host.
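As a sketch, a yarn-site.xml fragment setting these three properties could look like the following; the hostname and the number of mount points are illustrative, not taken from the question:

```xml
<!-- yarn-site.xml (illustrative values; adjust paths and hostname for your cluster) -->
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>file:///data/1/yarn/local,file:///data/2/yarn/local</value>
</property>
<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value>file:///data/1/yarn/logs,file:///data/2/yarn/logs</value>
</property>
<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>hdfs://namenode-host.company.com:8020/var/log/hadoop-yarn/apps</value>
</property>
```

Each *-dirs property takes a comma-separated list, one entry per JBOD mount point.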
Question : For YARN: conf/core-site.xml and conf/yarn-site.xml, respectively, have the IP addresses, not the hostnames, of the NameNode, the ResourceManager, and the ResourceManager Scheduler.
1. True 2. False
Correct Answer : 2
Explanation: For YARN: make sure conf/core-site.xml and conf/yarn-site.xml, respectively, have the hostnames, not the IP addresses, of the NameNode, the ResourceManager, and the ResourceManager Scheduler.
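For illustration, the relevant entries would use fully qualified domain names rather than IP addresses; the example.com hostnames below are placeholders, not values from the question:

```xml
<!-- core-site.xml: NameNode address (hostname, not IP) -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode.example.com:8020</value>
</property>

<!-- yarn-site.xml: ResourceManager and ResourceManager Scheduler addresses -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>resourcemanager.example.com</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>resourcemanager.example.com:8030</value>
</property>
```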
1. 6 GB reserved for system memory + (if HBase) 8 GB for HBase 2. 4 GB reserved for system memory + (if HBase) 8 GB for HBase 3. 2 GB reserved for system memory + (if HBase) 8 GB for HBase 4. 12 GB reserved for system memory + (if HBase) 8 GB for HBase
1. The physical RAM limit for each Map and Reduce task 2. The JVM heap size limit for each task 3. The amount of virtual memory each task will receive 4. 1 and 3 5. All of 1, 2 and 3
1. Two active NameNodes and two Standby NameNodes 2. One active NameNode and one Standby NameNode 3. Two active NameNodes and one Standby NameNode 4. Unlimited. HDFS High Availability (HA) is designed to overcome limitations on the number of NameNodes you can deploy