
Cloudera Hadoop Administrator Certification Questions and Answers (Dumps and Practice Questions)



Question : You have a Hadoop cluster in the Geneva datacenter with a NameNode on host hadoopexam,
a Secondary NameNode on host hadoopexam2, and 1,000 slave nodes. Three times a day you have to create a report
showing when the last checkpoint happened. Select the best way to find this information.

1. Connect to the web UI of the Primary NameNode (http://hadoopexam1:50090/) and look at the "Last Checkpoint" information.
2. Execute hdfs dfsadmin -lastreport on the command line
3.
4. With the command line option hdfs dfsadmin -Checkpointinformation


Correct Answer :

Explanation: The key administrative information provided on the Secondary NameNode web interface is the last checkpoint time.
The Secondary NameNode's web UI shows when it last performed its checkpoint operation.
This is not displayed on the NameNode's web UI, and it is not available via the hdfs dfsadmin command. As with the NameNode web UI, you can also view the Secondary NameNode's log files. The metrics exposed via JMX can be viewed at the URL http://(secondary_namenode):50090/jmx
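The JMX endpoint returns JSON, so the report can be scripted. A minimal shell sketch of pulling a last-checkpoint value out of such a response; the bean and field names below are illustrative samples, not verified against a live cluster, and on a real cluster you would fetch the JSON with curl from the URL above:

```shell
# On a live cluster: curl -s http://<secondary_namenode>:50090/jmx
# Here we parse a canned sample so the sketch is self-contained.
jmx_sample='{"beans":[{"name":"Hadoop:service=SecondaryNameNode","LastCheckpointTime":1700000000000}]}'

# Pull out the numeric value of the (assumed) LastCheckpointTime field.
ts=$(printf '%s' "$jmx_sample" | grep -o '"LastCheckpointTime":[0-9]*' | cut -d: -f2)
echo "Last checkpoint (epoch ms): $ts"
```

On GNU systems, date -d @$((ts/1000)) converts the epoch-milliseconds value into a readable timestamp for the report.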




Question : Which of the following information can be obtained by connecting to the web UI of the Secondary NameNode?

1. NameNode Address
2. Last Checkpoint Time
3.
4. CheckPoint Size
5. CheckPoint Edit Dirs


1. 1,2,3,4
2. 2,3,4,5
3.
4. All 1,2,3,4,5

Correct Answer :

Explanation : The key administrative information provided on the Secondary NameNode web interface is the last checkpoint time. In addition,
you can also view the following information:

NameNode Address
Last Checkpoint Time
Checkpoint Period
CheckPoint Size
CheckPoint Edit Dirs
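The checkpoint period and directories shown in that UI are driven by hdfs-site.xml. A minimal sketch, assuming Hadoop 2.x property names; the values here are the common defaults, shown only for illustration:

```xml
<!-- hdfs-site.xml (fragment) -->
<property>
  <name>dfs.namenode.checkpoint.period</name>
  <value>3600</value> <!-- seconds between checkpoints -->
</property>
<property>
  <name>dfs.namenode.checkpoint.dir</name>
  <value>file:///data/dfs/namesecondary</value>
</property>
```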





Question : In HadoopExam Inc's Geneva datacenter you have a Hadoop cluster with the NameNode on host HadoopExam and the Secondary NameNode on host HadoopExam; all
remaining nodes are DataNodes. A specific node in your cluster appears to be running slower than other nodes with the same hardware configuration.
You suspect that the system is swapping memory to disk due to over-allocation of resources. Which commands may be used to view the memory and swap usage on the system?

1. ps -aef | grep java

2. vmstat

3.

4. du

5. memswap

6. memoryusage

7. top


1. 1,5,6
2. 1,4,7
3.
4. 5,6,7
5. 1,6,7

Correct Answer :

Explanation: There are no commands named memswap or memoryusage on Unix and Linux operating systems. ps -aef | grep java lists the running Java processes but shows no memory-usage information, and du reports disk-space usage, not memory.

vmstat reports information about processes, memory, paging, block IO, traps, and CPU activity. The first report produced gives averages since the last reboot; additional reports give information on a sampling period of length delay. The process and memory reports are instantaneous in either case.

top provides an ongoing look at processor activity in real time. It displays a listing of the most CPU-intensive tasks on the system and provides an interactive interface for manipulating processes. It can sort the tasks by CPU usage, memory usage, and runtime, and most features can be selected either by an interactive command or in a personal or system-wide configuration file.

free reports the available RAM on a Linux machine, though new Linux users and admins often misinterpret its output: the memory shown as "used" includes read cache, which will be released for application use before swapping begins. Additionally, memory and swap usage can be viewed with cat /proc/meminfo, and swap usage with cat /proc/swaps or swapon -s. In short, vmstat, top, and free can all be used to display memory and swap information.
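The commands above ultimately read the kernel's /proc counters, so a short sketch of checking a suspect node works even on minimal systems (assumes Linux):

```shell
# free(1) and vmstat(8) derive their numbers from /proc/meminfo.
grep -E '^(MemTotal|MemFree|SwapTotal|SwapFree):' /proc/meminfo

# Per-device swap usage, the same data reported by `swapon -s`.
cat /proc/swaps

# Interactive equivalents (commented so the sketch stays non-blocking):
#   vmstat 5 3   # three samples, 5 seconds apart; watch the si/so columns
#   free -m      # RAM and swap totals in MiB
#   top          # press Shift+M to sort tasks by memory usage
```

Nonzero si/so columns in vmstat over a sustained period are the usual confirmation that the node is actively swapping.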



Related Questions


Question : All of the files required for running a particular YARN application are put in this path
for the duration of the application run. Which of the following properties in yarn-site.xml is used to configure this path?

1. yarn.nodemanager.log-dirs

2. yarn.nodemanager.local-dirs

3. yarn.nodemanager.remote-app-log-dir

4. yarn.nodemanager.dirs





Question : Which of the following components in MRv2 maintains the history of jobs?

1. MapReduce Server
2. MapReduce JobHistory Server
3. Application Master
4. 2 and 3
5. 1 , 2 and 3


Question : YARN requires a staging directory for temporary files created by running jobs. By default it creates /tmp/hadoop-yarn/staging,
but users cannot run jobs. What could be the reason?

1. Directory path is not correct
2. Staging directory is full
3. Directory has restrictive permissions
4. None of the above



Question : In MRv2, Map and Reduce tasks run in a container. Which of the following components is responsible for launching that container?
1. JobHistoryServer
2. NodeManager
3. Application Master
4. Resource Manager


Question : Which of the following are the required properties to run the YARN architecture?

1. yarn-site.xml: yarn.resourcemanager.hostname = your.hostname.com

2. yarn-site.xml: yarn.nodemanager.aux-services = mapreduce_shuffle

3. mapred-site.xml: mapreduce.framework.name = yarn

4. All 1,2 and 3
5. No configuration is needed; by default CDH5 runs in YARN mode


Question : In MRv1, each node was configured with a fixed number of map slots and a fixed number of reduce slots.
Under YARN, there is no distinction between resources available for maps and resources available for reduces; all resources are available for both.



1. True
2. False