
Cloudera Hadoop Developer Certification Questions and Answers (Dumps and Practice Questions)



Question : You have created a MapReduce job to process a TimeSeries market data file, with a driver class called HadoopDriver (in the default package) packaged into a jar called HadoopExam.jar. What is the appropriate way to submit this job to the cluster?
1. hadoop jar HadoopExam.jar HadoopDriver outputdir inputdir
2. hadoop inputdir outputdir jar HadoopExam.jar HadoopDriver
3. Access Mostly Uused Products by 50000+ Subscribers
4. hadoop jar HadoopExam.jar HadoopDriver inputdir outputdir

Correct Answer : Get Latest Questions and Answers :
Explanation: The hadoop jar command runs a jar file. Users can bundle their MapReduce code in a jar file and execute it using this command.

Usage: hadoop jar <jar> [driver_class] args...

Streaming jobs are also run via this command (see the Hadoop Streaming examples), and the classic word count example is likewise run with the jar command.
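For illustration, a minimal driver along these lines could be submitted with the command above. The class name HadoopDriver and the jar name HadoopExam.jar come from the question; everything else is a generic sketch using the standard org.apache.hadoop.mapreduce API (with the default identity mapper and reducer standing in for the real job logic):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Submitted as: hadoop jar HadoopExam.jar HadoopDriver <inputdir> <outputdir>
public class HadoopDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "timeseries-market-data");
        job.setJarByClass(HadoopDriver.class);
        // Real mapper/reducer classes would be set here, e.g. job.setMapperClass(...)
        FileInputFormat.addInputPath(job, new Path(args[0]));   // inputdir
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // outputdir
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}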





Question : To analyze the website clicks of HadoopExam.com you have written a MapReduce job which will produce a click report for each week, e.g. 53 reports for the whole year. Which of the following Hadoop API classes must you use so that one output file is generated per week and the output data goes to the corresponding output file?
1. Hive
2. MapReduce Chaining
3. Access Mostly Uused Products by 50000+ Subscribers
4. Partitioner

Correct Answer : Get Latest Questions and Answers :

Explanation: The Partitioner partitions the key space.

The Partitioner controls the partitioning of the keys of the intermediate map outputs. The key (or a subset of the key) is used to derive the partition, typically by a hash function. The total number of partitions is the same as the number of reduce tasks for the job; hence this controls which of the m reduce tasks an intermediate key (and hence the record) is sent to for reduction. Note: if your Partitioner class needs access to the job's configuration object, implement the Configurable interface.

When there is more than one reducer, the map tasks partition their output among the reducers using a partitioning function. By default the partitioning function uses the hash code of the key to identify the partition, but it can be overridden with a user-defined partitioning function. A user-defined partitioning function must be a class that implements the Partitioner interface. By implementing a custom partitioner you can assign each key-value pair produced by the mappers to a specific partition. To produce 53 separate report files, the job has to be run with 53 reducers, one for each week, and your custom partitioner must examine the data to determine the week and use that information to set the partition. For more information, see the "Scaling Out: Data Flow" section in chapter 2 of Hadoop: The Definitive Guide, 3rd Edition.

"Partitioning" is the process of determining which reducer instance will receive which intermediate keys and values. Each mapper must determine, for all of its output (key, value) pairs, which reducer will receive them. It is necessary that for any key, regardless of which mapper instance generated it, the destination partition is the same: if the key "cat" is generated in two separate (key, value) pairs, they must both be reduced together. It is also important for performance reasons that the mappers be able to partition data independently; they should never need to exchange information with one another to determine the partition for a particular key. Hadoop uses an interface called Partitioner to determine which partition a (key, value) pair will go to. A single partition refers to all (key, value) pairs that will be sent to a single reduce task. Hadoop MapReduce determines, when the job starts, how many partitions it will divide the data into: if twenty reduce tasks are to be run (controlled by the JobConf.setNumReduceTasks(int num) method), then twenty partitions must be filled.
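As a sketch of the idea only: a custom Partitioner can route each record to one of 53 reducers based on a week number carried in the key. The class name WeekPartitioner and the assumption that the map output key starts with the week number are invented for illustration, not taken from the question:

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Assumes the map output key is a Text of the form "<week>_<rest>", e.g. "07_clicks".
public class WeekPartitioner extends Partitioner<Text, Text> {
    @Override
    public int getPartition(Text key, Text value, int numPartitions) {
        // Week number 1..53 parsed from the key (illustrative parsing only).
        int week = Integer.parseInt(key.toString().split("_")[0]);
        return (week - 1) % numPartitions;   // partitions 0..52 when numPartitions == 53
    }
}

// In the driver:
//   job.setNumReduceTasks(53);
//   job.setPartitionerClass(WeekPartitioner.class);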







Question : Reducers are generally used to write the job's output data to the desired location or database. In your ETL MapReduce job you set the number of reducers to zero; select the correct statement that applies.
1. You cannot configure the number of reducers
2. No reduce tasks execute. The output of each map task is written to a separate file in HDFS
3. Access Mostly Uused Products by 50000+ Subscribers
4. You cannot configure the number of reducers; it is decided by the TaskTracker at runtime

Correct Answer : Get Latest Questions and Answers :

Explanation: Picking the appropriate size for the tasks of your job can radically change the performance of Hadoop. Increasing the number of tasks increases the framework overhead, but also improves load balancing and lowers the cost of failures. At one extreme is the 1 map / 1 reduce case, where nothing is distributed; the other extreme is 1,000,000 maps / 1,000,000 reduces, where the framework runs out of resources for the overhead.

Number of maps: The number of maps is usually driven by the number of DFS blocks in the input files, although that leads people to adjust their DFS block size in order to adjust the number of maps. The right level of parallelism for maps seems to be around 10-100 maps per node, although it has been taken up to 300 or so for very CPU-light map tasks. Task setup takes a while, so it is best if the maps take at least a minute to execute. Actually controlling the number of maps is subtle. The mapred.map.tasks parameter is just a hint to the InputFormat for the number of maps. The default InputFormat behavior is to split the total number of bytes into the right number of fragments; however, in the default case the DFS block size of the input files is treated as an upper bound for input splits. A lower bound on the split size can be set via mapred.min.split.size. Thus, if you expect 10 TB of input data and have 128 MB DFS blocks, you'll end up with about 82,000 maps, unless mapred.map.tasks is even larger. Ultimately the InputFormat determines the number of maps. The number of map tasks can also be increased manually using JobConf's conf.setNumMapTasks(int num); this can increase the number of map tasks, but will not set the number below that which Hadoop determines by splitting the input data.

Number of reduces: The ideal number of reducers is the value that gets them closest to:
* a multiple of the block size
* a task time between 5 and 15 minutes
* the fewest files possible

Anything other than that means there is a good chance your reducers are less than great. There is a tremendous tendency for users to use a REALLY high value ("More parallelism means faster!") or a REALLY low value ("I don't want to blow my namespace quota!"). Both are equally dangerous, resulting in one or more of:
* terrible performance on the next phase of the workflow
* terrible performance due to the shuffle
* terrible overall performance because you have overloaded the namenode with objects that are ultimately useless
* destroyed disk IO for no really sane reason
* lots of network transfers due to dealing with crazy amounts of CFIF/MFIF work

There are always exceptions and special cases. One particular special case is that if following this advice makes the next step in the workflow do ridiculous things, then that step likely needs to be an exception to the general rules of thumb above. Currently the number of reduces is limited to roughly 1000 by the buffer size for the output files (io.buffer.size * 2 * numReduces must be less than heapSize). This will be fixed at some point, but until it is, it provides a pretty firm upper bound. The number of reduce tasks can be increased in the same way as the map tasks, via JobConf's conf.setNumReduceTasks(int num).

When the number of reduce tasks is set to zero, no reduce tasks are executed for that job. The data produced by the map phase is written to HDFS as the job output without modification, and the output of each map task becomes a single output file in HDFS.
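A minimal sketch of the zero-reducer (map-only) case follows; the driver class name and the use of the identity Mapper are hypothetical stand-ins, while the calls are the standard org.apache.hadoop.mapreduce.Job API:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MapOnlyEtlDriver {                      // hypothetical driver name
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "etl-map-only");
        job.setJarByClass(MapOnlyEtlDriver.class);
        job.setMapperClass(Mapper.class);            // identity mapper as a stand-in
        job.setNumReduceTasks(0);                    // map-only: no shuffle, no reduce phase
        // Each map task writes its output directly to HDFS as a part-m-NNNNN file.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}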





Related Questions


Question : What happens if the mapper output does not match the reducer input?
1. Hadoop API will convert the data to the type that is needed by the reducer.
2. Data input/output inconsistency cannot occur. A preliminary validation check is executed prior
to the full execution of the job to ensure there is consistency.
3. Access Mostly Uused Products by 50000+ Subscribers
4. A runtime exception will be thrown and the MapReduce job will fail




Question : Can you provide multiple input paths to a MapReduce job?
1. Yes, but only in Hadoop 0.22+
2. No, Hadoop always operates on one input directory
3. Access Mostly Uused Products by 50000+ Subscribers
4. Yes, but the limit is currently capped at 10 input paths.




Question : Can you assign different mappers to different input paths?
1. Yes, but only if data is identical.
2. Yes, different mappers can be assigned to different directories
3. Access Mostly Uused Products by 50000+ Subscribers
4. Yes, but only in Hadoop 0.22+
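For background on this and the previous question: Hadoop's MultipleInputs class (org.apache.hadoop.mapreduce.lib.input) lets a single job read several input paths, each with its own InputFormat and Mapper. A brief sketch inside a driver, with the paths and mapper class names invented for illustration:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

// Inside the driver's main(), after Job job = Job.getInstance(...);
// ClickLogMapper and SalesLogMapper are hypothetical Mapper subclasses.
MultipleInputs.addInputPath(job, new Path("/data/clicks"),
        TextInputFormat.class, ClickLogMapper.class);
MultipleInputs.addInputPath(job, new Path("/data/sales"),
        TextInputFormat.class, SalesLogMapper.class);
// For a single mapper over several paths, FileInputFormat.addInputPath(job, ...)
// can simply be called once per path instead.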





Question : Can you suppress reducer output?

1. Yes, there is a special data type that will suppress job output
2. No, a MapReduce job will always generate output.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Yes, but only during map execution when reducers have been set to zero
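For reference, one standard way to suppress normal job output is the NullOutputFormat class shipped with Hadoop (org.apache.hadoop.mapreduce.lib.output); a brief sketch inside a driver:

import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

// Inside the driver, after Job job = Job.getInstance(...):
// the job runs normally but writes no output files; reducers would typically
// persist their results themselves, e.g. to a database.
job.setOutputFormatClass(NullOutputFormat.class);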




Question : Is there a map input format?
1. Yes, but only in Hadoop 0.22+
2. Yes, there is a special format for map files
3. Access Mostly Uused Products by 50000+ Subscribers
4. Both 2 and 3 are correct answers.




Question : What is the most important feature of MapReduce?
1. Ability to store large amounts of data
2. Ability to process data on the cluster of the machines without copying all the data over
3. Access Mostly Uused Products by 50000+ Subscribers
4. Ability to process large amounts of data in parallel