Question : You are working on a project for a HadoopExam client where you need to chain together MapReduce and Pig jobs. You also need the ability to use forks, decision points, and path joins. Which of the following ecosystem projects allows you to accomplish this?
Explanation: An Oozie workflow is a collection of actions (e.g. Hadoop Map/Reduce jobs, Pig jobs) arranged in a control-dependency DAG (Directed Acyclic Graph) that specifies the sequence in which the actions execute. The graph is specified in hPDL (an XML Process Definition Language). hPDL is a fairly compact language with a limited set of flow-control and action nodes. Control nodes define the flow of execution and include the beginning and end of a workflow (start, end, and fail/kill nodes) and mechanisms to control the execution path (decision, fork, and join nodes).

Oozie itself is a Java web application that runs in a Java servlet container (Tomcat) and uses a database to store workflow definitions and currently running workflow instances, including instance states and variables.

For comparison with the other options: HUE is a GUI for interacting with a Hadoop cluster. Sqoop is a tool for transferring data between HDFS and external data stores. HBase is a distributed key-value store. ZooKeeper is a distributed coordination engine. Oozie is a workflow and orchestration framework, so it is the project that provides chaining, forks, decision points, and path joins.
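For illustration, a minimal hPDL workflow sketch that chains a MapReduce action and a Pig action with fork, join, and decision control nodes. The element names follow the Oozie workflow schema; the action names, script, class, and the outputReady property are hypothetical placeholders, not part of the original question:

    <workflow-app name="demo-wf" xmlns="uri:oozie:workflow:0.4">
        <start to="parallel-work"/>

        <!-- fork: run the MapReduce and Pig actions in parallel -->
        <fork name="parallel-work">
            <path start="mr-step"/>
            <path start="pig-step"/>
        </fork>

        <action name="mr-step">
            <map-reduce>
                <job-tracker>${jobTracker}</job-tracker>
                <name-node>${nameNode}</name-node>
                <configuration>
                    <property>
                        <name>mapred.mapper.class</name>
                        <value>com.example.MyMapper</value>
                    </property>
                </configuration>
            </map-reduce>
            <ok to="join-work"/>
            <error to="fail"/>
        </action>

        <action name="pig-step">
            <pig>
                <job-tracker>${jobTracker}</job-tracker>
                <name-node>${nameNode}</name-node>
                <script>etl.pig</script>
            </pig>
            <ok to="join-work"/>
            <error to="fail"/>
        </action>

        <!-- join: wait for both forked paths before continuing -->
        <join name="join-work" to="check-output"/>

        <!-- decision: choose the next node based on a predicate -->
        <decision name="check-output">
            <switch>
                <case to="end">${outputReady eq 'true'}</case>
                <default to="fail"/>
            </switch>
        </decision>

        <kill name="fail">
            <message>Workflow failed</message>
        </kill>
        <end name="end"/>
    </workflow-app>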
Watch the training from http://hadoopexam.com/index.html/#hadoop-training
Question : You have the following key-value pairs as output from your Map task: (HadoopExam, 1) (Is, 1) (the, 1) (best, 1) (material, 1) (provider, 1) (for, 1) (the, 1) (Hadoop, 1) How many keys will be passed to the Reducer's reduce() method?
Explanation: Picking the appropriate number of tasks for your job can radically change the performance of Hadoop. Increasing the number of tasks increases the framework overhead, but improves load balancing and lowers the cost of failures. At one extreme is the 1 map / 1 reduce case, where nothing is distributed. The other extreme is 1,000,000 maps / 1,000,000 reduces, where the framework runs out of resources for the overhead.

Number of maps: The number of maps is usually driven by the number of DFS blocks in the input files, which leads people to adjust their DFS block size in order to adjust the number of maps. The right level of parallelism for maps seems to be around 10-100 maps per node, although it has been taken up to 300 or so for very CPU-light map tasks. Task setup takes a while, so it is best if each map takes at least a minute to execute. Actually controlling the number of maps is subtle. The mapred.map.tasks parameter is just a hint to the InputFormat for the number of maps. The default InputFormat behavior is to split the total number of bytes into the right number of fragments; however, in the default case the DFS block size of the input files is treated as an upper bound for input splits. A lower bound on the split size can be set via mapred.min.split.size. Thus, if you expect 10 TB of input data and have 128 MB DFS blocks, you will end up with roughly 82,000 maps, unless mapred.map.tasks is even larger. Ultimately the InputFormat determines the number of maps. The number of map tasks can also be increased manually using JobConf's conf.setNumMapTasks(int num); this can raise the number of map tasks, but will not set it below what Hadoop determines by splitting the input data.

Number of reduces: The ideal number of reducers is the value that gets them closest to: a multiple of the block size, a task time between 5 and 15 minutes, and the fewest output files possible. Anything other than that means there is a good chance your reducers are less than great. There is a strong tendency for users to pick a really high value ("more parallelism means faster!") or a really low value ("I don't want to blow my namespace quota!"). Both are equally dangerous, resulting in one or more of: terrible performance in the next phase of the workflow; terrible performance due to the shuffle; terrible overall performance because you have overloaded the NameNode with objects that are ultimately useless; destroyed disk I/O for no sane reason; and lots of network transfers due to dealing with crazy amounts of CFIF/MFIF work. There are always exceptions and special cases: if following this advice makes the next step in the workflow do ridiculous things, then that step likely needs to be an exception to these general rules of thumb. Currently the number of reduces is limited to roughly 1,000 by the buffer size for the output files (io.buffer.size * 2 * numReduces must be less than the heap size). This will be fixed at some point, but until then it provides a fairly firm upper bound. The number of reduce tasks can be set in the same way as the map tasks, via JobConf's conf.setNumReduceTasks(int num).

When the number of reduce tasks is set to zero, no reduce tasks are executed for that job. The intermediate data produced by the map phase is copied into HDFS as the job output without modification, and the intermediate data from each mapper becomes a single output file in HDFS.
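As a sketch of where these knobs live in code, a minimal driver using the old org.apache.hadoop.mapred API referenced above (the job name, paths, and chosen numbers are placeholders, and the mapper/reducer classes are omitted for brevity):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class WordCountDriver {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(WordCountDriver.class);
            conf.setJobName("word-count");

            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(IntWritable.class);

            // conf.setMapperClass(...) and conf.setReducerClass(...) omitted here.

            // Hint only: the InputFormat still decides the real number of map tasks.
            conf.setNumMapTasks(100);

            // Raise the lower bound on split size to get fewer, larger map tasks.
            conf.set("mapred.min.split.size", "268435456");   // 256 MB

            // Explicitly set the number of reduce tasks.
            conf.setNumReduceTasks(10);

            JobClient.runJob(conf);
        }
    }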
For more information about running a job with zero reducers, see the Hadoop documentation. For the question above: the Reducer collects all the values associated with a given key together and passes them to the reduce() method in a single call to that method. Because the key 'the' appears twice in the list, the reduce() method will be called once for that key, with a list of the two values. Every other key appears only once, so there is one call for each of the remaining keys, giving a total of eight calls to the reduce() method, one for each unique key. For more information, see the "Shuffle and Sort: The Reduce Side" section in chapter 6 of Hadoop: The Definitive Guide, 3rd Edition.
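To make the counting concrete, here is a minimal summing reducer in the old org.apache.hadoop.mapred API (a sketch, not tied to any particular job in the question):

    import java.io.IOException;
    import java.util.Iterator;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reducer;
    import org.apache.hadoop.mapred.Reporter;

    public class SumReducer extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {

        // Called once per unique key. For the key "the" the values iterator
        // holds two 1s; for every other key in the question it holds a single 1.
        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }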
Watch the training from http://hadoopexam.com/index.html/#hadoop-training
Question : While processing a file using the MapReduce framework, the output of the Mapper is called the intermediate key-value pairs. Select the correct statement about this output of the mappers. 1. Intermediate key-value pairs are written to the HDFS of the machines running the map tasks, and then copied to the machines running the reduce tasks. 2. Intermediate key-value pairs are written to the local disks of the machines running the reduce tasks. 3. Access Mostly Uused Products by 50000+ Subscribers 4. Intermediate key-value pairs are written to the local disks of the machines running the map tasks, and then read by the machines running the reduce tasks.
Explanation: How is intermediate data organized? There are two kinds of intermediate data: intermediate data on the mapper and intermediate data on the reducer. Intermediate data on the mapper is a map from key to value. Intermediate data on the reducer is all of the values for a certain set of keys, partitioned by key. (For the full detail of the on-disk file layout, the Hadoop source code is the reference.) If there is too much data about a key to be held in a single file, it is kept in multiple files, but when the reducer is called for that key it must be able to access all of the values for that key. All intermediate output generated by the mappers is written to local disk; because it is intermediate data, storing it in HDFS with replication would be excessive. When the data is ready for transfer, the reduce tasks copy it from the nodes that ran the map tasks.
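As an illustration of what the mapper-side intermediate pairs look like in code, a minimal word-count mapper in the old org.apache.hadoop.mapred API (a sketch only). The (word, 1) pairs it emits are buffered, partitioned by key, and spilled to the local disk of the map node, not to HDFS:

    import java.io.IOException;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class TokenMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {

        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        // key = byte offset of the line, value = the line itself (TextInputFormat).
        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            for (String token : value.toString().split("\\s+")) {
                if (token.isEmpty()) {
                    continue;
                }
                word.set(token);
                // Emits an intermediate (word, 1) pair; the framework writes it
                // to the map task's local disk for the reducers to fetch later.
                output.collect(word, ONE);
            }
        }
    }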
Watch the training from http://hadoopexam.com/index.html/#hadoop-training
1. Hive is a part of the Apache Hadoop project that provides a SQL-like interface for data processing 2. Hive is one component of the Hadoop framework that allows for collecting data together into an external repository 3. Access Mostly Uused Products by 50000+ Subscribers 4. Hive is part of the Apache Hadoop project that enables in-memory analysis of real-time streams of data
1. The Hadoop administrator has to set the number of reducer slots to zero on all slave nodes. This will disable the reduce step. 2. It is impossible to disable the reduce step, since it is a critical part of the Map-Reduce abstraction. 3. Access Mostly Uused Products by 50000+ Subscribers 4. While you cannot completely disable reducers, you can set the number of reducers to one. There needs to be at least one reduce step in the Map-Reduce abstraction.
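For reference, the reduce phase is normally disabled per job by the developer via the job configuration, not by changing slot counts on the slave nodes. A minimal sketch with the old JobConf API (the class name and paths are placeholders):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class MapOnlyDriver {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(MapOnlyDriver.class);
            conf.setJobName("map-only");

            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            // Zero reduce tasks: the shuffle/sort and reduce phases are skipped,
            // and each map task's output is written directly to HDFS.
            conf.setNumReduceTasks(0);

            JobClient.runJob(conf);
        }
    }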
1. The default input format is XML. The developer can specify other input formats as appropriate if XML is not the correct input 2. There is no default input format. The input format always should be specified. 3. Access Mostly Uused Products by 50000+ Subscribers 4. The default input format is TextInputFormat, with the byte offset as the key and the entire line as the value
1. In order to override the default input format, the Hadoop administrator has to change the default settings in the config file 2. In order to override the default input format, a developer has to set the new input format on the job config before submitting the job to the cluster 3. Access Mostly Uused Products by 50000+ Subscribers 4. None of these answers are correct
Solution : 21
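The two option lists above concern the default input format (TextInputFormat, with the byte offset as the key and the line as the value) and how a developer overrides it on the job configuration before submitting the job. A minimal sketch with the old JobConf API (the class name, paths, and choice of KeyValueTextInputFormat are illustrative placeholders):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.KeyValueTextInputFormat;

    public class CustomInputFormatDriver {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(CustomInputFormatDriver.class);
            conf.setJobName("custom-input-format");

            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            // Without this call the job uses TextInputFormat:
            // key = LongWritable byte offset, value = Text line.
            // Here the developer overrides it with KeyValueTextInputFormat,
            // which splits each line into a Text key and Text value at the first tab.
            conf.setInputFormat(KeyValueTextInputFormat.class);

            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(Text.class);

            JobClient.runJob(conf);
        }
    }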