Cloudera Hadoop Administrator Certification Questions and Answers (Dumps and Practice Questions)
Question : YARN supports the scheduling of MapReduce jobs, which helps short jobs complete even when long jobs are running.
1. True
2. False
Correct Answer : 1 (True)
Question : YARN supports the scheduling of MapReduce jobs, which helps reduce the memory requirement of a MapReduce job.
1. True
2. False
Correct Answer : 2 (False)
Question : YARN supports the scheduling of MapReduce jobs, which helps reduce the number of cores required for a large MapReduce job.
1. True
2. False
Correct Answer : 2 (False)
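Note : The behaviour described in the first question is what YARN's pluggable schedulers (the Fair Scheduler, and similarly the Capacity Scheduler) provide: a short job receives a share of cluster resources even while a long job is running, instead of waiting behind it in a FIFO queue. Scheduling does not change how much memory or how many cores a job itself needs, which is why the second and third statements are false. As a minimal sketch (assuming a stock Apache Hadoop 2.x cluster), the Fair Scheduler is enabled with a single property in yarn-site.xml:

    <property>
      <name>yarn.resourcemanager.scheduler.class</name>
      <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
    </property>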
Related Questions
Question : You have a website www.QuickTechie.com, where you have one month of user profile update logs. For classification analysis
you want to save all of the data in a single file called QT31012015.log, which is approximately 30GB in size.
You are able to push this full file into a directory on HDFS as /log/QT/QT31012015.log. You then learn that
you could store the same data in HBase as well, because it provides ...
1. Random writes
2. Fault tolerance
3. …
4. Batch processing
5. 2,3
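Note : What sets HBase apart from a flat file on HDFS in this scenario is random, row-level reads and writes (an HDFS file is append-only and is read in bulk). A minimal sketch using the third-party happybase Python client; the Thrift hostname, table name, and column family below are hypothetical:

    import happybase

    # Connect to an HBase Thrift gateway (hypothetical hostname).
    connection = happybase.Connection('hbase-thrift.example.com')
    table = connection.table('profile_updates')

    # Random write: update a single row in place, keyed by user id.
    table.put(b'user:42', {b'log:last_update': b'2015-01-31T10:15:00'})

    # Random read: fetch exactly that row back, with no full-file scan.
    print(table.row(b'user:42'))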
Question : You have set up a Hadoop cluster in the Norman data center with all settings left at their defaults. How much data will you be able to store on
your Hadoop cluster if it has 12 nodes with 4TB of raw disk space per node allocated to HDFS storage?
1. Nearly 3TB
2. Nearly 12TB
3. Nearly 16TB
4. Nearly 48TB
5. Can not calculate
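Note : The arithmetic behind capacity questions like this is raw disk capacity divided by the replication factor, which defaults to 3 (the dfs.replication setting). A quick sketch in Python:

    nodes = 12
    raw_tb_per_node = 4     # TB of raw disk allocated to HDFS on each node
    replication = 3         # HDFS default block replication (dfs.replication)

    raw_capacity = nodes * raw_tb_per_node    # 48 TB of raw storage
    usable = raw_capacity / replication       # every block is stored 3 times
    print(usable)                             # 16.0, i.e. nearly 16TB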
Question : You have a website www.QuickTechie.com, where all of the user profiles are stored in a MySQL database.
Every day you want to fetch the new profiles from this database and store them in HDFS as log files; you also want
POJOs generated for interacting with the imported data. Select the tool which best solves the above problem.
1. Oozie
2. Hue
3. …
4. Sqoop
5. Pig or Hive
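Note : Sqoop fits this requirement because it both imports relational rows into HDFS and generates Java classes (POJOs) for the imported records. A hedged sketch of a daily incremental import; the JDBC URL, database user, table, column, and class names are all hypothetical:

    sqoop import \
      --connect jdbc:mysql://db.quicktechie.com/website \
      --username repl -P \
      --table user_profiles \
      --target-dir /log/QT/profiles \
      --incremental append --check-column profile_id --last-value 0 \
      --class-name com.quicktechie.UserProfile --outdir ./generated-src

Here --class-name and --outdir control the generated POJO, while --incremental append picks up only the rows added since the last run.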
Question : As a Hadoop developer you have always preferred MapReduce job chaining to execute multiple MapReduce jobs,
where the output of one job becomes the input of another. But recently you learned that Apache Oozie is the best workflow
engine for Hadoop jobs. Select the correct statement about Apache Oozie.
1. Chains of MapReduce jobs only; no Pig or Hive tasks or jobs. These MapReduce sequences can be combined with forks and path joins.
2. Iterative repetition of MapReduce jobs, shell scripts and Quartz scheduler until a desired answer or state is reached.
3. …
4. Chains of MapReduce jobs and Pig. These sequences can be combined with other actions, including forks, decision points, and path joins.
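Note : Option 4 describes how Oozie actually works: a workflow is a directed acyclic graph of actions (MapReduce, Pig, Hive, shell, and others) defined in XML and wired together with forks, joins, and decision nodes. A minimal sketch of a fork/join in workflow.xml, with the action bodies elided:

    <workflow-app name="chained-jobs" xmlns="uri:oozie:workflow:0.4">
      <start to="forking"/>
      <fork name="forking">
        <path start="mr-step"/>
        <path start="pig-step"/>
      </fork>
      <action name="mr-step">
        <map-reduce> ... </map-reduce>
        <ok to="joining"/> <error to="fail"/>
      </action>
      <action name="pig-step">
        <pig> ... </pig>
        <ok to="joining"/> <error to="fail"/>
      </action>
      <join name="joining" to="end"/>
      <kill name="fail"><message>A step failed</message></kill>
      <end name="end"/>
    </workflow-app>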
Question : Your cluster has datanodes, each with a single TB hard drive allocated to HDFS storage.
You reserve no disk space for MapReduce. You implement default replication settings. How much data can you store in HDFS (assuming no compression)?
1. about 5 TB
2. about 20 TB
3. …
4. about 30 TB
Question : How do you differentiate between a failed task and a killed task?
1. A
2. B
3. C
4. D