Question: The logical records that FileInputFormats define do not usually fit neatly into HDFS blocks. For example, a TextInputFormat's logical records are lines, which will cross HDFS block boundaries more often than not. This has no bearing on the functioning of your program (lines are not missed or broken, for example), but it is worth knowing about, because it means that data-local maps (that is, maps running on the same host as their input data) will perform some remote reads. The slight overhead this causes is not normally significant. You are running the latest version of Hadoop, which includes MRv2. You submit a job to process a single www.HadoopExam.com log file, which is made up of two blocks, named BLOCKX and BLOCKY. BLOCKX is on nodeA and is being processed by a Mapper running on that node. BLOCKY is on nodeB. A record spans the two blocks; that is, the first part of the record is in BLOCKX, but the end of the record is in BLOCKY. What happens as the record is being read by the Mapper on nodeA?
1. The remaining part of the record is streamed across the network from either nodeA or nodeB
2. The remaining part of the record is streamed across the network from nodeA
3. Access Mostly Uused Products by 50000+ Subscribers
4. The remaining part of the record is streamed across the network from nodeB
Correct Answer: 4

Explanation: This is an interesting question, and a look at the code explains the details. The splits are computed on the client by InputFormat.getSplits, so FileInputFormat gives the following picture. For each input file, get the file length and the block size, and calculate the split size as max(minSize, min(maxSize, blockSize)), where maxSize corresponds to mapred.max.split.size and minSize to mapred.min.split.size. The file is then divided into FileSplits based on the split size calculated above. What is important here is that each FileSplit is initialized with a start parameter corresponding to its offset in the input file. There is still no handling of lines at that point. The relevant part of the code looks like this:

while (((double) bytesRemaining) / splitSize > SPLIT_SLOP) {
  int blkIndex = getBlockIndex(blkLocations, length - bytesRemaining);
  splits.add(new FileSplit(path, length - bytesRemaining, splitSize,
                           blkLocations[blkIndex].getHosts()));
  bytesRemaining -= splitSize;
}

After that, look at the LineRecordReader defined by TextInputFormat; that is where lines are handled. When a LineRecordReader is initialized, it instantiates a LineReader, which is an abstraction for reading lines over an FSDataInputStream. There are two cases. If a CompressionCodec is defined, the codec is responsible for handling boundaries (probably not relevant to this question). If there is no codec, things get interesting: if the start of the InputSplit is different from 0, the reader backtracks one character and then skips the first line it encounters, identified by \n or \r\n (Windows). The backtrack is important because, in case the line boundary coincides with the split boundary, it ensures that a valid line is not skipped. Here is the relevant code:

if (codec != null) {
  in = new LineReader(codec.createInputStream(fileIn), job);
  end = Long.MAX_VALUE;
} else {
  if (start != 0) {
    skipFirstLine = true;
    --start;
    fileIn.seek(start);
  }
  in = new LineReader(fileIn, job);
}
if (skipFirstLine) { // skip first line and re-establish "start".
  start += in.readLine(new Text(), 0, (int) Math.min((long) Integer.MAX_VALUE, end - start));
}
this.pos = start;

It is very typical for record boundaries not to coincide with block boundaries. When that happens, the Mapper reading the block simply requests more of the file in order to read the rest of the record, which results in that extra data being streamed across the network from the node holding the next block (nodeB in this scenario). Moving the block itself would take far longer and would be wasteful.

Since the splits are calculated on the client, the mappers do not need to run in sequence; every mapper already knows whether it needs to discard the first line or not. So, for example, suppose a file contains two lines of 100 MB each and, to simplify, the split size is 64 MB. When the input splits are calculated, we get the following scenario:
Split 1 contains the path and the hosts of the first block, and is initialized at start 0 MB, length 64 MB.
Split 2 is initialized at start 64 MB, length 64 MB.
Split 3 is initialized at start 128 MB, length 64 MB.
Split 4 is initialized at start 192 MB, length 8 MB.
Mapper A processes split 1: start is 0, so it does not skip the first line; it reads a full line that extends beyond the 64 MB limit, so it needs a remote read.
Mapper B processes split 2: start is not 0, so it skips the first line it finds after 64 MB minus 1 byte; that line ends at the end of line 1, at 100 MB, which is still inside split 2. That leaves 28 MB of line 2 in split 2, so the remaining 72 MB are read remotely.
Mapper C processes split 3: start is not 0, so it skips the first line it finds after 128 MB minus 1 byte; that line ends at the end of line 2, at 200 MB, which is the end of the file, so there is nothing further to do.
Mapper D behaves like Mapper C, except that it looks for a newline after 192 MB minus 1 byte.
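To make the arithmetic above easier to follow, here is a small, self-contained sketch (not the actual FileInputFormat source) that applies the same split-size formula and start-offset loop to the 200 MB example. The minSize, maxSize, and SPLIT_SLOP values are assumptions chosen to match the scenario described above.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: reproduces the split-size formula and start offsets
// for a 200 MB file, as described in the explanation above.
public class SplitSketch {
    public static void main(String[] args) {
        long mb = 1024L * 1024;
        long blockSize = 64 * mb;                  // HDFS block size
        long minSize = 1;                          // stands in for mapred.min.split.size
        long maxSize = Long.MAX_VALUE;             // stands in for mapred.max.split.size
        long splitSize = Math.max(minSize, Math.min(maxSize, blockSize));
        double SPLIT_SLOP = 1.1;                   // slack factor used by FileInputFormat

        long length = 200 * mb;                    // file made up of two 100 MB lines
        long bytesRemaining = length;
        List<String> splits = new ArrayList<String>();
        while (((double) bytesRemaining) / splitSize > SPLIT_SLOP) {
            long start = length - bytesRemaining;  // offset of this split in the file
            splits.add("start=" + (start / mb) + "MB length=" + (splitSize / mb) + "MB");
            bytesRemaining -= splitSize;
        }
        if (bytesRemaining != 0) {
            splits.add("start=" + ((length - bytesRemaining) / mb)
                    + "MB length=" + (bytesRemaining / mb) + "MB");
        }
        // Prints: start=0MB length=64MB, start=64MB length=64MB,
        //         start=128MB length=64MB, start=192MB length=8MB
        for (String s : splits) {
            System.out.println(s);
        }
    }
}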
Question: If you run the word count MapReduce program with m map tasks and r reduce tasks, how many output files will you get at the end of the job, and how many key-value pairs will there be in each file? Assume k is the number of unique words in the input files. (The word count program reads text input and produces output that contains every distinct word and the number of times that word occurred anywhere in the text.)
1. There will be r files, each with approximately m/r key-value pairs.
2. There will be m files, each with approximately k/r key-value pairs.
3. Access Mostly Uused Products by 50000+ Subscribers
4. There will be r files, each with approximately k/m key-value pairs.
Explanation: The WordCount application is quite straightforward. The Mapper implementation, via the map method, processes one line at a time, as provided by the specified TextInputFormat. It then splits the line into tokens separated by whitespace, via StringTokenizer, and emits a key-value pair of [word, 1]. For the given sample input, the first map emits: [Hello, 1] [World, 1] [Bye, 1] [World, 1]. The second map emits: [Hello, 1] [Hadoop, 1] [Goodbye, 1] [Hadoop, 1]. We'll learn more about the number of maps spawned for a given job, and how to control them in a fine-grained manner, a bit later in the tutorial. WordCount also specifies a combiner, so the output of each map is passed through the local combiner (which, per the job configuration, is the same class as the Reducer) for local aggregation after being sorted on the keys. The output of the first map becomes: [Bye, 1] [Hello, 1] [World, 2]. The output of the second map becomes: [Goodbye, 1] [Hadoop, 2] [Hello, 1]. The Reducer implementation, via the reduce method, just sums up the values, which are the occurrence counts for each key (that is, words in this example). Thus the output of the job is: [Bye, 1] [Goodbye, 1] [Hadoop, 2] [Hello, 2] [World, 2]. The run method specifies various facets of the job, such as the input/output paths (passed via the command line), key-value types, and input/output formats, in the JobConf. It then calls JobClient.runJob to submit the job and monitor its progress. We'll learn more about JobConf, JobClient, Tool, and other interfaces and classes a bit later in the tutorial. The word count job emits each unique word once, together with the count of the number of occurrences of that word, so there will be k key-value pairs in the output in total. Because the job runs with r reduce tasks, there will be r output files, one for each reducer. The word keys are distributed more or less evenly among the reducers, so each output file will contain roughly k/r key-value pairs. Note that the number of map tasks is irrelevant, as the intermediate output from all map tasks is combined together as part of the shuffle phase.
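For reference, here is a minimal sketch of the Mapper and Reducer described above, written against the classic org.apache.hadoop.mapred API that this explanation refers to. The class names (WordCountSketch, Map, Reduce) are illustrative, and job setup is omitted here.

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class WordCountSketch {

    // Tokenizes each input line and emits [word, 1] for every token.
    public static class Map extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            StringTokenizer tokenizer = new StringTokenizer(value.toString());
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, ONE);
            }
        }
    }

    // Sums the occurrence counts for each word; also usable as the combiner.
    public static class Reduce extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }
}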
To read more about the word count example, see the Word Count Example on the Hadoop Wiki. For more information about how intermediate and final output are generated, see chapter 2 of Hadoop: The Definitive Guide, 3rd Edition, in the Scaling Out: Data Flow section.
Question: You are processing the MAIN.PROFILE.log file, generated by the Apache web server of the QuickTechie.com website, using a MapReduce job. There are 100 nodes in the cluster and 3 reducers defined. Which of the reduce tasks will process a Text key that begins with the regular expression "\w+"?
1. The first reducer will process the key that satisfies the regular expression "\w+"
2. The second reducer will process the key that satisfies the regular expression "\w+"
3. Access Mostly Uused Products by 50000+ Subscribers
4. Not enough data to determine which reduce task will receive which key
Correct Answer: 4

Explanation: Mapper maps input key/value pairs to a set of intermediate key/value pairs. Maps are the individual tasks that transform input records into intermediate records. The transformed intermediate records do not need to be of the same type as the input records, and a given input pair may map to zero or many output pairs. The Hadoop MapReduce framework spawns one map task for each InputSplit generated by the InputFormat for the job. Overall, Mapper implementations are passed the JobConf for the job via the JobConfigurable.configure(JobConf) method and override it to initialize themselves. The framework then calls map(WritableComparable, Writable, OutputCollector, Reporter) for each key/value pair in the InputSplit for that task. Applications can then override the Closeable.close() method to perform any required cleanup. Output pairs are collected with calls to OutputCollector.collect(WritableComparable, Writable). Applications can use the Reporter to report progress, set application-level status messages and update Counters, or just indicate that they are alive. All intermediate values associated with a given output key are subsequently grouped by the framework and passed to the Reducer(s) to determine the final output. Users can control the grouping by specifying a Comparator via JobConf.setOutputKeyComparatorClass(Class). The Mapper outputs are sorted and then partitioned per Reducer. The total number of partitions is the same as the number of reduce tasks for the job. Users can control which keys (and hence records) go to which Reducer by implementing a custom Partitioner. Users can optionally specify a combiner, via JobConf.setCombinerClass(Class), to perform local aggregation of the intermediate outputs, which helps to cut down the amount of data transferred from the Mapper to the Reducer. The intermediate, sorted outputs are always stored in a simple (key-len, key, value-len, value) format. Applications can control whether, and how, the intermediate outputs are compressed, and which CompressionCodec is used, via the JobConf.

How Many Maps? The number of maps is usually driven by the total size of the inputs, that is, the total number of blocks of the input files. The right level of parallelism for maps seems to be around 10-100 maps per node, although it has been set as high as 300 maps for very CPU-light map tasks. Task setup takes a while, so it is best if the maps take at least a minute to execute. Thus, if you expect 10 TB of input data and have a block size of 128 MB, you'll end up with about 82,000 maps, unless setNumMapTasks(int) (which only provides a hint to the framework) is used to set it even higher.

Reducer: Reducer reduces a set of intermediate values which share a key to a smaller set of values. The number of reduces for the job is set by the user via JobConf.setNumReduceTasks(int). Overall, Reducer implementations are passed the JobConf for the job via the JobConfigurable.configure(JobConf) method and can override it to initialize themselves. The framework then calls the reduce(WritableComparable, Iterator, OutputCollector, Reporter) method for each <key, (list of values)> pair in the grouped inputs. Applications can then override the Closeable.close() method to perform any required cleanup. The Reducer has three primary phases: shuffle, sort and reduce.

Shuffle: The input to the Reducer is the sorted output of the mappers. In this phase the framework fetches the relevant partition of the output of all the mappers via HTTP.

Sort: The framework groups Reducer inputs by key in this stage (since different mappers may have output the same key). The shuffle and sort phases occur simultaneously; while map outputs are being fetched, they are merged.

Secondary Sort: If the equivalence rules for grouping the intermediate keys are required to be different from those for grouping keys before reduction, then one may specify a Comparator via JobConf.setOutputValueGroupingComparator(Class). Since JobConf.setOutputKeyComparatorClass(Class) can be used to control how intermediate keys are grouped, these can be used in conjunction to simulate a secondary sort on values.

Reduce: In this phase the reduce(WritableComparable, Iterator, OutputCollector, Reporter) method is called for each <key, (list of values)> pair in the grouped inputs. The output of the reduce task is typically written to the FileSystem via OutputCollector.collect(WritableComparable, Writable). Applications can use the Reporter to report progress, set application-level status messages and update Counters, or just indicate that they are alive. The output of the Reducer is not sorted.

How Many Reduces? The right number of reduces seems to be 0.95 or 1.75 multiplied by (<no. of nodes> * mapred.tasktracker.reduce.tasks.maximum). With 0.95, all of the reduces can launch immediately and start transferring map outputs as the maps finish. With 1.75, the faster nodes will finish their first round of reduces and launch a second wave of reduces, doing a much better job of load balancing. Increasing the number of reduces increases the framework overhead, but improves load balancing and lowers the cost of failures. The scaling factors above are slightly less than whole numbers to reserve a few reduce slots in the framework for speculative and failed tasks.

Reducer NONE: It is legal to set the number of reduce tasks to zero if no reduction is desired. In this case the outputs of the map tasks go directly to the FileSystem, into the output path set by setOutputPath(Path), and the framework does not sort the map outputs before writing them out to the FileSystem.

When there is more than one reducer, the map tasks partition their output among the reducers using a partitioning function. By default the partitioning function uses the hash code of the key to identify the partition, but it can be overridden with a user-defined partitioning function. Either way, there is not sufficient data to determine which reduce task will receive a given key. For more information, see chapter 2 of Hadoop: The Definitive Guide, 3rd Edition, in the Scaling Out: Data Flow section.
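To make the last point concrete, here is a small sketch of hash-based partitioning as a user-defined Partitioner in the old mapred API. It is a simplified stand-in for the default HashPartitioner, not the exact Hadoop source: the reducer that receives a key depends only on the key's hash code and the number of reduce tasks, so nothing about the key's text (such as matching "\w+") determines which reducer gets it.

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

// Simplified stand-in for the default hash partitioning behaviour.
public class HashStylePartitioner implements Partitioner<Text, IntWritable> {

    public void configure(JobConf job) {
        // No configuration needed for this sketch.
    }

    public int getPartition(Text key, IntWritable value, int numReduceTasks) {
        // Mask off the sign bit so the partition index is never negative,
        // then take the remainder modulo the number of reduce tasks.
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}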
1. The ApplicationMaster requests resources from the ResourceManager
2. The ApplicationMaster starts a single instance of the ResourceManager
3. Access Mostly Uused Products by 50000+ Subscribers
4. The ApplicationMaster starts an instance of the ResourceManager within each Container
1. When the types of the reduce operation's input key and input value match the types of the reducer's output key and output value, and when the reduce operation is both commutative and associative.
2. When the signature of the reduce method matches the signature of the combine method.
3. Access Mostly Uused Products by 50000+ Subscribers
4. Always. The point of a combiner is to serve as a mini-reducer directly after the map phase to increase performance.
5. Never. Combiners and reducers must be implemented separately because they serve different purposes.
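As an illustration of the condition described in option 1, here is a hedged sketch of a WordCount-style job that reuses its Reducer class as the combiner via the old JobConf API. The WordCountSketch.Map and WordCountSketch.Reduce classes are placeholders referring to a mapper and reducer like the ones sketched earlier; reusing the reducer this way is only safe because summing counts is commutative and associative and the reducer's input and output key/value types are the same.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class WordCountJob {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCountJob.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(WordCountSketch.Map.class);
        // The reducer doubles as the combiner: summation is commutative and
        // associative, and its input and output types match.
        conf.setCombinerClass(WordCountSketch.Reduce.class);
        conf.setReducerClass(WordCountSketch.Reduce.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}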