Cloudera Hadoop Developer Certification Questions and Answers (Dumps and Practice Questions)
Question : Which is the correct command to list all the files in the current directory?
1. hadoop fs -ls
2. hadoop fs -list
3. (option not shown in source)
4. All of the above
Correct Answer :
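For reference, the listing command from option 1 can be tried directly against HDFS; the paths below are illustrative, not from the source, and the commands assume a configured Hadoop client:

```shell
# List the contents of the current user's HDFS home directory
hadoop fs -ls
# List a specific directory (hypothetical path)
hadoop fs -ls /user/pappu
# -R lists recursively; -h prints human-readable file sizes
hadoop fs -ls -R -h /user/pappu
```

There is no `hadoop fs -list` subcommand, which is why option 2 fails.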
Question : Which of the following commands are correct?
1. hadoop fs -ls
2. hadoop fs -copyToLocal /user/pappu/pappu.txt pappu.txt
3. (option not shown in source)
4. 1,2 and 3 are correct
5. Only 1 and 2 are correct
Correct Answer :
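Option 2's `-copyToLocal` copies a file out of HDFS onto the local filesystem; `-get` is its equivalent. A sketch reusing the question's paths (assumes the HDFS file exists):

```shell
# Copy /user/pappu/pappu.txt from HDFS into the current local directory
hadoop fs -copyToLocal /user/pappu/pappu.txt pappu.txt
# Equivalent form
hadoop fs -get /user/pappu/pappu.txt pappu.txt
# The reverse direction uses -put (or -copyFromLocal)
hadoop fs -put pappu.txt /user/pappu/pappu.txt
```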
Question : Which is the correct command to delete a directory?
1. hadoop fs -r pappu
2. hadoop fs -remove pappu
3. (option not shown in source)
4. hadoop fs -rem pappu
Correct Answer :
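For reference, in current Hadoop the actual directory-removal command is `hadoop fs -rm -r` (the older `hadoop fs -rmr` form is deprecated); the path below is hypothetical:

```shell
# Recursively delete a directory in HDFS
hadoop fs -rm -r /user/pappu
# -skipTrash bypasses the trash directory and deletes immediately
hadoop fs -rm -r -skipTrash /user/pappu
```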
Related Questions
Question : You are working on a project for a HadoopExam client where you need to chain together MapReduce and Pig jobs.
You also need the ability to use forks, decision points, and path joins.
Which of the following ecosystem projects allows you to accomplish this?
1. Oozie
2. MapReduce chaining
3. (option not shown in source)
4. Zookeeper
5. Hue
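Oozie workflows are defined in XML and natively support fork, join, and decision nodes. A minimal fork/join sketch follows; all names and the elided action bodies are illustrative, not from the source:

```xml
<workflow-app name="demo-wf" xmlns="uri:oozie:workflow:0.5">
  <start to="split"/>
  <!-- fork runs the Pig step and the MapReduce step in parallel -->
  <fork name="split">
    <path start="pig-step"/>
    <path start="mr-step"/>
  </fork>
  <action name="pig-step">
    <pig><!-- script, job-tracker, name-node config here --></pig>
    <ok to="merge"/>
    <error to="fail"/>
  </action>
  <action name="mr-step">
    <map-reduce><!-- mapper/reducer config here --></map-reduce>
    <ok to="merge"/>
    <error to="fail"/>
  </action>
  <!-- join waits for both forked paths to finish -->
  <join name="merge" to="end"/>
  <kill name="fail"><message>A step failed</message></kill>
  <end name="end"/>
</workflow-app>
```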
Question : You have the following key-value pairs as output from your Map task:
(HadoopExam, 1)
(Is, 1)
(the, 1)
(best, 1)
(material, 1)
(provider, 1)
(for, 1)
(the, 1)
(Hadoop, 1)
How many keys will be passed to the Reducer's reduce() method?
1. 9
2. 8
3. (option not shown in source)
4. 6
5. 5
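The shuffle groups map output by key, so reduce() is called once per distinct key. The distinct-key count for the pairs above can be checked with a quick pipeline (note that "the" occurs twice):

```shell
printf '%s\n' HadoopExam Is the best material provider for the Hadoop |
  sort | uniq | wc -l
# prints 8: nine records, but "the" is repeated
```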
Question : While processing a file with the MapReduce framework, the Mapper's output is known as the intermediate key-value pairs. Select the correct statement about this output of the mappers.
1. Intermediate key-value pairs are written to the HDFS of the machines running the map tasks, and then copied to the machines running the reduce tasks.
2. Intermediate key-value pairs are written to the local disks of the machines running the reduce tasks.
3. (option not shown in source)
4. Intermediate key-value pairs are written to the local disks of the machines running the map tasks, and then read by the machines running the reduce tasks.
Question : Every day, HadoopExam stores each user's IP address+location as a string, along with the total number of clicks as an integer (incremented on each click), in a very large file, where the keys are strings (address+location) and the values are integers (clicks).
For each unique key, you want to identify the largest integer. In writing a MapReduce program to accomplish this,
is using a combiner advantageous?
1. Yes
2. No
3. (option not shown in source)
4. Yes, if configured while cluster setup
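Max is associative and commutative, so a combiner can safely pre-aggregate per-key maxima on the map side and cut shuffle traffic. The same per-key-max logic, sketched with awk over fabricated sample records (key, tab, clicks):

```shell
# Emulate map output as "key<TAB>value" lines, then keep the max per key,
# the way a combiner or reducer would
printf '1.2.3.4-NY\t5\n1.2.3.4-NY\t9\n5.6.7.8-LA\t2\n' |
  awk -F'\t' '$2 > m[$1] { m[$1] = $2 } END { for (k in m) print k, m[k] }' |
  sort
# 1.2.3.4-NY 9
# 5.6.7.8-LA 2
```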
Question : A MapReduce program has two components: one that implements the mapper, and another that implements the reducer. You have to implement the
map() method for the Mapper and the reduce() method for the Reducer. When is the earliest that the reduce() method of any reduce task of your submitted
job will be called?
1. Not until all map tasks have completed
2. As soon as the first map task has completed
3. (option not shown in source)
4. It can be started at any time during the job; there is no particular time
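A useful distinction here: reduce() itself never runs before the last map task finishes, but the shuffle/copy phase of reduce tasks may start earlier. When that copying begins is governed by the `mapreduce.job.reduce.slowstart.completedmaps` property; the job jar, class, and paths below are hypothetical, and the driver is assumed to use ToolRunner so that `-D` options are parsed:

```shell
# Launch reduce tasks (for shuffle only) once 80% of maps are done;
# reduce() calls still wait until all map output is available
hadoop jar wordcount.jar com.example.WordCount \
  -D mapreduce.job.reduce.slowstart.completedmaps=0.80 \
  /input /output
```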
Question : While processing time-series data from the QuickTechi Inc log file using a MapReduce ETL batch job, you have set the number of reducers
to 1 (one). Select the correct statement that applies.
1. A single reducer gathers and processes all the output from all the mappers. The output is written to multiple files in HDFS.
2. The number of reducers cannot be configured; it is determined by the NameNode at runtime.
3. (option not shown in source)
4. A single reducer will process all the output from all the mappers. The output is written to a single file in HDFS.
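With one reducer, the entire map output funnels through a single reduce task, which writes a single part file. The reducer count is set with the standard `mapreduce.job.reduces` property; the jar, class, and paths below are hypothetical, and the driver is assumed to use ToolRunner:

```shell
hadoop jar etl.jar com.example.LogETL \
  -D mapreduce.job.reduces=1 \
  /logs /out
# One reduce task -> one output file:
hadoop fs -ls /out
#   /out/_SUCCESS
#   /out/part-r-00000
```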