Question : Every day, HadoopExam stores each user's IP address+location as a string, together with the total number of clicks as an integer (incremented on each click), in a very large file. The keys are strings (address+location) and the values are integers (clicks). For each unique key, you want to identify the largest integer. In writing a MapReduce program to accomplish this, is using a Combiner advantageous?
1. Yes
2. No
3.
4. Yes, if configured while setting up the cluster
Explanation: Combiner: The pipeline shown earlier omits a processing step that can be used to optimize bandwidth usage in your MapReduce job. Called the Combiner, this pass runs after the Mapper and before the Reducer. Use of the Combiner is optional. If this pass is suitable for your job, instances of the Combiner class are run on every node that has run map tasks. The Combiner receives as input all data emitted by the Mapper instances on a given node, and its output is sent to the Reducers instead of the Mappers' output. The Combiner is a "mini-reduce" process which operates only on data generated by one machine.

Word count is a prime example of where a Combiner is useful. The Word Count program in listings 1-3 emits a (word, 1) pair for every instance of every word it sees. So if the same document contains the word "cat" 3 times, the pair ("cat", 1) is emitted three times; all of these are then sent to the Reducer. By using a Combiner, these can be condensed into a single ("cat", 3) pair to be sent to the Reducer. Now each node sends only a single value to the reducer for each word, drastically reducing the total bandwidth required for the shuffle process and speeding up the job. The best part is that we do not need to write any additional code to take advantage of this: if a reduce function is both commutative and associative, it can be used as a Combiner as well. You can enable combining in the word count program by adding the following line to the driver:

conf.setCombinerClass(Reduce.class);

The Combiner should be an instance of the Reducer interface. If your Reducer itself cannot be used directly as a Combiner because of commutativity or associativity, you might still be able to write a third class to use as a Combiner for your job. The only effect a combiner has is to reduce the number of records that are passed from the mappers to the reducers in the shuffle and sort phase. For more information on combiners, see chapter 2 of Hadoop: The Definitive Guide, 3rd Edition, in the Scaling Out: Combiner Functions section.

The average operation is commutative, but not associative, so it is not a natural fit for a combiner. It would, however, be possible to define a custom intermediate data type that stores both the computed average and the number of records that average represents. The reducers could then use that information to unroll the averages computed by the combiners and calculate a total average for each key. Note that in that case, the reducer could not be reused as the combiner; the combiner would have to be a separate implementation.

The maximum operation is both associative and commutative, so it is a candidate for using a combiner. Watch the training at http://hadoopexam.com/index.html/#hadoop-training
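Because maximum is both commutative and associative, the reducer for the clicks scenario in this question can also be registered as the combiner. Below is a minimal sketch using the old org.apache.hadoop.mapred API; the class name MaxClicksReducer and the Text/IntWritable key-value types are illustrative assumptions, not code from the original explanation.

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Emits the maximum click count seen for each address+location key.
// Because max is commutative and associative, the same class can be set
// as both the combiner and the reducer in the driver:
//   conf.setCombinerClass(MaxClicksReducer.class);
//   conf.setReducerClass(MaxClicksReducer.class);
public class MaxClicksReducer extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {

  public void reduce(Text key, Iterator<IntWritable> values,
                     OutputCollector<Text, IntWritable> output,
                     Reporter reporter) throws IOException {
    int max = Integer.MIN_VALUE;
    while (values.hasNext()) {
      max = Math.max(max, values.next().get());
    }
    output.collect(key, new IntWritable(max));
  }
}

Note that the framework may run the combiner zero, one, or several times on a node, so the job must produce correct results whether or not the combiner is applied; that is exactly why only commutative and associative functions (such as max) can reuse the reducer this way.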
Question : A MapReduce program has two components: one that implements the mapper and another that implements the reducer. You have to implement the map() method for the Mapper and the reduce() method for the Reducer. When is the earliest that the reduce() method of any reduce task of your submitted job will be called?
1. Not until all map tasks have completed
2. As soon as the first map task has completed
3.
4. It can be started at any time during the job; there is no particular time
Explanation: The Reducer has 3 primary phases:

Shuffle: The Reducer copies the sorted output from each Mapper using HTTP across the network.

Sort: The framework merge-sorts Reducer inputs by key (since different Mappers may have output the same key). The shuffle and sort phases occur simultaneously, i.e. while outputs are being fetched they are merged.

SecondarySort: To achieve a secondary sort on the values returned by the value iterator, the application should extend the key with the secondary key and define a grouping comparator. The keys will be sorted using the entire key, but will be grouped using the grouping comparator to decide which keys and values are sent in the same call to reduce. The grouping comparator is specified via Job.setGroupingComparatorClass(Class); the sort order is controlled by Job.setSortComparatorClass(Class). For example, say that you want to find duplicate web pages and tag them all with the URL of the "best" known example. You would set up the job like this: Map Input Key: url; Map Input Value: document; Map Output Key: document checksum, url pagerank; Map Output Value: url; Partitioner: by checksum; OutputKeyComparator: by checksum and then decreasing pagerank; OutputValueGroupingComparator: by checksum.

Reduce: No reduce task's reduce() method is called until all map tasks have completed. Every reduce task's reduce() method expects to receive its data in sorted order. In this phase the reduce(Object, Iterable, Context) method is called for each (key, (collection of values)) pair in the sorted inputs. The output of the reduce task is typically written to a RecordWriter via TaskInputOutputContext.write(Object, Object). The output of the Reducer is not re-sorted. If the reduce() method were called before all of the map tasks had completed, it would be possible for it to receive data out of order.

For more information about the shuffle and sort phase, watch the training at http://hadoopexam.com/index.html/#hadoop-training
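As a rough illustration of the secondary-sort setup described above, the sketch below shows a grouping comparator that groups composite keys by checksum only. The composite key layout (checksum and pagerank in a single Text key separated by a tab) and the class names are assumptions made for this example, not part of the original explanation.

import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;

// Groups composite keys of the form "checksum<TAB>pagerank" by checksum only,
// so every url with the same checksum arrives in a single reduce() call while
// the sort comparator has already ordered the group by decreasing pagerank.
public class ChecksumGroupingComparator extends WritableComparator {

  public ChecksumGroupingComparator() {
    super(Text.class, true);   // create Text instances for comparison
  }

  public int compare(WritableComparable a, WritableComparable b) {
    String checksumA = a.toString().split("\t", 2)[0];
    String checksumB = b.toString().split("\t", 2)[0];
    return checksumA.compareTo(checksumB);
  }
}

// Wired into the driver roughly as follows (the sort comparator, assumed to
// order by checksum and then decreasing pagerank, would be a separate class):
//   job.setSortComparatorClass(ChecksumPagerankComparator.class);
//   job.setGroupingComparatorClass(ChecksumGroupingComparator.class);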
Question : While processing time-series data from the QuickTechi Inc log file using a MapReduce ETL batch job, you have set the number of reducers to 1 (one). Select the correct statement.
1. A single reducer gathers and processes all the output from all the mappers. The output is written to multiple files in HDFS.
2. The number of reducers cannot be configured; it is determined by the NameNode at runtime.
3.
4. A single reducer will process all the output from all the mappers. The output is written to a single file in HDFS.
Explanation: Picking the appropriate number of tasks for your job can radically change the performance of Hadoop. Increasing the number of tasks increases the framework overhead, but also improves load balancing and lowers the cost of failures. At one extreme is the 1 map/1 reduce case, where nothing is distributed; the other extreme is 1,000,000 maps/1,000,000 reduces, where the framework runs out of resources for the overhead.

Number of Maps: The number of maps is usually driven by the number of DFS blocks in the input files, which leads people to adjust their DFS block size to adjust the number of maps. The right level of parallelism for maps seems to be around 10-100 maps/node, although it has been taken up to 300 or so for very CPU-light map tasks. Task setup takes a while, so it is best if the maps take at least a minute to execute. Actually controlling the number of maps is subtle. The mapred.map.tasks parameter is just a hint to the InputFormat for the number of maps. The default InputFormat behavior is to split the total number of bytes into the right number of fragments; however, in the default case the DFS block size of the input files is treated as an upper bound for input splits. A lower bound on the split size can be set via mapred.min.split.size. Thus, if you expect 10TB of input data and have 128MB DFS blocks, you'll end up with 82k maps, unless your mapred.map.tasks is even larger. Ultimately the InputFormat determines the number of maps. The number of map tasks can also be increased manually using JobConf's conf.setNumMapTasks(int num). This can be used to increase the number of map tasks, but will not set the number below that which Hadoop determines by splitting the input data.

Number of Reduces: The ideal number of reducers is the optimal value that gets them closest to:
* a multiple of the block size
* a task time between 5 and 15 minutes
* the fewest files possible

Anything other than that means there is a good chance your reducers are less than great. There is a tremendous tendency for users to use a REALLY high value ("More parallelism means faster!") or a REALLY low value ("I don't want to blow my namespace quota!"). Both are equally dangerous, resulting in one or more of:
* terrible performance on the next phase of the workflow
* terrible performance due to the shuffle
* terrible overall performance because you've overloaded the namenode with objects that are ultimately useless
* destroying disk IO for no really sane reason
* lots of network transfers due to dealing with crazy amounts of CFIF/MFIF work

There are always exceptions and special cases. One particular special case is that if following this advice makes the next step in the workflow do ridiculous things, then we likely need to 'be an exception' to the above general rules of thumb. Currently the number of reduces is limited to roughly 1000 by the buffer size for the output files (io.buffer.size * 2 * numReduces less than heapSize). This will be fixed at some point, but until then it provides a pretty firm upper bound. The number of reduce tasks can be increased in the same way as the map tasks, via JobConf's conf.setNumReduceTasks(int num).

When the number of reduce tasks is set to zero, no reduce tasks are executed for that job. The intermediate data produced by the map phase is copied into HDFS as the output without modification, and the intermediate data from each mapper becomes a single output file in HDFS. When the number of reduce tasks is set to one, a single reduce task is executed for that job. That reducer processes all intermediate data produced by the map phase and produces a single output file in HDFS. For more information about how the shuffle, sort, and reduce phases work, watch the training at http://hadoopexam.com/index.html/#hadoop-training
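For concreteness, here is a minimal, self-contained driver sketch using the old JobConf API that shows where the reducer count from this question is set. The class and job names are made up for the illustration, and the default identity mapper and reducer are used so that nothing else needs to be defined.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class SingleReducerDriver {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(SingleReducerDriver.class);
    conf.setJobName("timeseries-etl-single-reducer");

    // No mapper/reducer set: the identity mapper and identity reducer are
    // used, which is enough to demonstrate the effect of the reducer count.
    conf.setOutputKeyClass(LongWritable.class);
    conf.setOutputValueClass(Text.class);

    // One reduce task: every mapper's output is shuffled to a single reducer,
    // which writes a single output file (part-00000) in the HDFS output
    // directory. Setting this to 0 would skip the reduce phase entirely and
    // write each mapper's output directly to HDFS instead.
    conf.setNumReduceTasks(1);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));
    JobClient.runJob(conf);
  }
}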
1. The most common problem with map-side joins is introducing a high level of code complexity. This complexity has several downsides: increased risk of bugs and performance degradation. Developers are cautioned to rarely use map-side joins.
2. The most common problem with map-side joins is a lack of available map slots, since map-side joins require a lot of mappers.
3.
4. The most common problem with map-side joins is not clearly specifying the primary index in the join. This can lead to very slow performance on large datasets.
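To illustrate the code-complexity point in option 1, here is a sketch of one common map-side join pattern (a replicated hash join): a small lookup table is loaded into memory in setup() and joined against each record in map(). The file name, record formats, and class name are hypothetical assumptions; the extra parsing, caching, and error handling is the kind of complexity the option refers to.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MapSideJoinMapper extends Mapper<LongWritable, Text, Text, Text> {

  private final Map<String, String> userById = new HashMap<String, String>();

  protected void setup(Context context) throws IOException, InterruptedException {
    // "users.txt" is assumed to have been shipped to every node (for example
    // via the distributed cache) with lines of the form "userId<TAB>userName".
    BufferedReader reader = new BufferedReader(new FileReader("users.txt"));
    String line;
    while ((line = reader.readLine()) != null) {
      String[] parts = line.split("\t", 2);
      if (parts.length == 2) {
        userById.put(parts[0], parts[1]);
      }
    }
    reader.close();
  }

  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    // Log records are assumed to look like "userId<TAB>restOfRecord".
    String[] parts = value.toString().split("\t", 2);
    String userName = userById.get(parts[0]);
    if (userName != null) {   // inner join: drop unmatched records
      context.write(new Text(userName), new Text(parts.length > 1 ? parts[1] : ""));
    }
  }
}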
1. No. The configuration settings in the configuration file take precedence.
2. Yes. The configuration settings made using the Java API take precedence.
3.
4. Only global configuration settings are captured in configuration files on the namenode. There are only a very few job parameters that can be set using the Java API.
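A small sketch of the precedence question above: a value set through the Java API before job submission overrides the value loaded from the *-site.xml files for that job, unless the file marks the property as final. The property used here (mapreduce.job.reduces) is just a convenient example.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ConfigPrecedenceExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();   // loads core-site.xml, mapred-site.xml, ...
    conf.set("mapreduce.job.reduces", "4");     // overrides the file value for this job only
    Job job = Job.getInstance(conf, "config-precedence-demo");
    System.out.println("reduces = " + job.getConfiguration().get("mapreduce.job.reduces"));
  }
}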
Question : What is the distributed cache?
1. The distributed cache is a special component on the namenode that caches frequently used data for faster client response. It is used during the reduce step.
2. The distributed cache is a special component on the datanode that caches frequently used data for faster client response. It is used during the map step.
3.
4. The distributed cache is a component that allows developers to deploy jars for Map-Reduce processing.
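For orientation, here is a minimal sketch of using the distributed cache with the old API to ship a read-only file and a jar to every task node before the job runs. The HDFS paths are hypothetical, and newer releases expose the same functionality through Job.addCacheFile / Job.addFileToClassPath instead.

import java.net.URI;

import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;

public class DistributedCacheSetup {
  public static void configureCache(JobConf conf) throws Exception {
    // Files must already be in HDFS; they are copied to each task node once
    // and can then be read locally inside the mapper or reducer.
    DistributedCache.addCacheFile(new URI("/cache/lookup.dat"), conf);
    DistributedCache.addFileToClassPath(new Path("/cache/extra-lib.jar"), conf);
  }
}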
1. Writable is a Java interface that needs to be implemented for streaming data to remote servers.
2. Writable is a Java interface that needs to be implemented for HDFS writes.
3.
4. None of these answers are correct.
1. Writable data types are specifically optimized for network transmissions
2. Writable data types are specifically optimized for file system storage
3.
4. Writable data types are specifically optimized for data retrieval
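To make the Writable questions above concrete, here is a minimal custom Writable sketch (the class name and fields are made up for the illustration): Hadoop serializes keys and values compactly for the shuffle and for file output by calling write()/readFields() rather than standard Java serialization.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

public class ClickStatsWritable implements Writable {
  private long totalClicks;
  private int uniqueVisitors;

  public ClickStatsWritable() { }   // no-arg constructor required for deserialization

  public ClickStatsWritable(long totalClicks, int uniqueVisitors) {
    this.totalClicks = totalClicks;
    this.uniqueVisitors = uniqueVisitors;
  }

  public void write(DataOutput out) throws IOException {   // serialize for network/disk
    out.writeLong(totalClicks);
    out.writeInt(uniqueVisitors);
  }

  public void readFields(DataInput in) throws IOException { // deserialize in the same field order
    totalClicks = in.readLong();
    uniqueVisitors = in.readInt();
  }
}

A type like this can be used as a map output value as-is; to serve as a key it would additionally need to implement WritableComparable so the framework can sort it during the shuffle.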