
MapR (HP) Hadoop Developer Certification Questions and Answers (Dumps and Practice Questions)



Question : Select the correct statements regarding the ExpressLane feature of MapR
A. It allows small jobs to be executed before large jobs
B. The ExpressLane feature reserves one or more map or reduce slots on each TaskTracker
C. If there is no slot available to run a small job, it will be executed on "ephemeral slots"
D. It requires the Fair Scheduler to be used
E. If a small job running on ephemeral slots is found to violate the definition of "small", the job will be killed and re-scheduled as a normal job
1. A,B,C
2. C,D,E
3. Access Mostly Uused Products by 50000+ Subscribers
4. A,B,E
5. A,B,C,D,E

Correct Answer : Get Latest Questions and Answer :
Explanation:




Question : A MapR local volume is


1. Replicated across the cluster nodes; hence, it never fills up.

2. Never replicated across the nodes

3. Access Mostly Uused Products by 50000+ Subscribers

4. 1,3

5. 2,3


Correct Answer : Get Latest Questions and Answer :
Explanation: MapR has made performance optimizations to the shuffle process, in which output from the mappers is sent to the reducers. Instead of writing intermediate data to local disks controlled by the operating system, MapR writes to a MapR-FS volume whose topology is limited to the local node.
This improves performance and reduces demand on local disk space while making the output available cluster-wide.
The direct shuffle leverages the underlying storage layer and takes advantage of its unique capabilities:
High sequential and random I/O performance, including the ability to create millions of files at extremely high rates (using sequential I/O)
The ability to leverage multiple NICs via RPC-level bonding. By comparison, the shuffle in other distributions can only leverage a single NIC (in theory, one could use port trunking in any distribution, but the performance gains would be minimal compared to the MapR distribution's RPC-level load balancing)
The ability to compress data at the block level
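
For illustration only, here is a minimal sketch of browsing such a node-local volume through the standard Hadoop FileSystem API. The mount point /var/mapr/local/<hostname> is an assumption about a typical MapR layout and may differ in your installation; the point is that the shuffle output lives in MapR-FS rather than on an OS-local disk, so it can be read from any node in the cluster.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only: lists the contents of a node's local volume through the
// ordinary FileSystem API. The path is an assumed MapR mount point
// (/var/mapr/local/<hostname>); adjust it for your cluster.
public class ListLocalVolume {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path localVolume = new Path("/var/mapr/local/" + args[0]); // args[0] = hostname
        for (FileStatus status : fs.listStatus(localVolume)) {
            System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
        }
    }
}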





Question : How does MapR make sure that the maximum amount of local volume space is available for the direct shuffle?


1. It reserves a few nodes in the cluster and does not store or replicate data on these nodes

2. If more space is required, it deletes data replicated on this node, and the deleted data is re-replicated on another node in the cluster

3. Access Mostly Uused Products by 50000+ Subscribers

4. Extra hard disks are attached to a few selected nodes in the cluster so that they have more storage than the average node in the cluster


Correct Answer : Get Latest Questions and Answer :
Explanation:


Related Questions


Question : Select the correct statements regarding logging in a Hadoop program


1. You can configure log levels in the commons-logging.properties file, which needs to be placed on the CLASSPATH

2. Supported log levels are trace, debug, info, warn, error and fatal. You can also add System.out.println calls in the Mapper and Reducer code

3. Access Mostly Uused Products by 50000+ Subscribers

4. 1,2

5. 1,2,3
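
To make options 1 and 2 concrete, here is a minimal sketch of logging from a Mapper, assuming the Apache Commons Logging API and the org.apache.hadoop.mapreduce Mapper class; the class name LoggingMapper and the messages are placeholders. Messages at or above the configured level go to the task's log output, while System.out.println output goes to the task's stdout log.

import java.io.IOException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Minimal sketch: commons-logging levels (trace/debug/info/warn/error/fatal)
// plus a plain System.out.println inside a Mapper.
public class LoggingMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    private static final Log LOG = LogFactory.getLog(LoggingMapper.class);

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        LOG.debug("processing record at offset " + key);   // debug-level message
        if (value.getLength() == 0) {
            LOG.warn("empty line at offset " + key);        // warn-level message
        }
        System.out.println("raw line: " + value);           // written to the task's stdout log
        context.write(value, key);
    }
}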



Question : What is the purpose of a Driver class?

1. It creates the output directory for the entire job

2. It is responsible for reading data from the input file and for shuffling and sorting the intermediate data

3. Access Mostly Uused Products by 50000+ Subscribers

4. It kills the job as soon as the last reducer finishes
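
As a point of reference, here is a minimal sketch of a typical driver class, assuming the org.apache.hadoop.mapreduce API; WordCountMapper and WordCountReducer are placeholder class names defined elsewhere. The driver only configures and submits the job (mapper, reducer, key/value types, input and output paths); it does not read the input or shuffle intermediate data itself.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Sketch of a driver: configure the job and submit it to the cluster.
// WordCountMapper/WordCountReducer are assumed to be defined elsewhere.
public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory (must not exist)
        System.exit(job.waitForCompletion(true) ? 0 : 1);        // submit and wait for completion
    }
}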


Question : In a MapReduce job, the map() method of the Mapper is called for each key-value pair. The key-value pairs emitted by map() are collected together and sent to the


1. RecordReader

2. OutputFormat

3. Access Mostly Uused Products by 50000+ Subscribers

4. Partitioner
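
For context on where the Partitioner sits in this flow, here is a minimal sketch of a custom Partitioner, assuming the org.apache.hadoop.mapreduce API; FirstLetterPartitioner is a hypothetical example. The framework calls getPartition() for every pair emitted by map() to decide which reducer receives it, and the class would be registered in the driver with job.setPartitionerClass(FirstLetterPartitioner.class).

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Hypothetical sketch: route keys to reducers based on the first character,
// so all words with the same initial letter go to the same reducer.
public class FirstLetterPartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        String k = key.toString();
        int first = k.isEmpty() ? 0 : Character.toLowerCase(k.charAt(0));
        return (first & Integer.MAX_VALUE) % numPartitions;  // non-negative bucket index
    }
}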



Question : Which methods are available if you use the Mapper class from the mapreduce package?


1. map(), setup(), cleanup(), run()

2. map(), reduce(), setup(), cleanup(), run()

3. Access Mostly Uused Products by 50000+ Subscribers

4. map(), reduce(), setup(), run()

5. map(), reduce(), setup()
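
As a reminder of what these methods look like in practice, here is a minimal sketch of a Mapper from the org.apache.hadoop.mapreduce package overriding setup(), map() and cleanup() (run() can also be overridden, as shown further below); WordCountMapper is a placeholder name.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Sketch of a new-API Mapper overriding its lifecycle methods.
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void setup(Context context) {
        // called once per task, before the first map() call (e.g. read configuration)
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
            word.set(token);
            context.write(word, one);  // emit (word, 1)
        }
    }

    @Override
    protected void cleanup(Context context) {
        // called once per task, after the last map() call (e.g. flush buffered state)
    }
}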



Question : Which of the following classes is responsible for creating key-value pairs from an input split and submitting them to the Mapper for further processing?

1. RecordReader

2. TaskTracker

3. Access Mostly Uused Products by 50000+ Subscribers

4. JobTracker
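
To show where the RecordReader fits, here is a minimal sketch of a custom InputFormat, assuming the org.apache.hadoop.mapreduce API; PlainTextInputFormat is a hypothetical name whose behaviour mirrors TextInputFormat. The RecordReader it returns turns each input split into (byte offset, line) key-value pairs, which the framework then feeds to the Mapper.

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

// Hypothetical sketch: an InputFormat whose RecordReader produces one
// (offset, line) pair per line of the split, just like TextInputFormat.
public class PlainTextInputFormat extends FileInputFormat<LongWritable, Text> {
    @Override
    public RecordReader<LongWritable, Text> createRecordReader(InputSplit split,
                                                               TaskAttemptContext context) {
        return new LineRecordReader();  // reads the split and emits key-value pairs to the Mapper
    }
}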


Question : Which methods can be called by the run() method of a Mapper/Reducer class?
1. setup(), map(), cleanup() or setup(), reduce(), cleanup()
2. setup(), map(), cleanup() or setup(), reduce()
3. Access Mostly Uused Products by 50000+ Subscribers
4. setup(), map(), reduce(), cleanup()
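
For reference, here is a sketch of what the Mapper's run() method does, paraphrased from the Hadoop source (recent versions wrap the loop in try/finally); the Reducer's run() is analogous, calling setup(), then reduce() once per key, then cleanup(). The class name RunMethodSketch is a placeholder.

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Sketch of the default control flow inside Mapper.run().
public class RunMethodSketch extends Mapper<LongWritable, Text, Text, LongWritable> {
    @Override
    public void run(Context context) throws IOException, InterruptedException {
        setup(context);                           // once, before the first record
        while (context.nextKeyValue()) {          // iterate over the split's records
            map(context.getCurrentKey(), context.getCurrentValue(), context);
        }
        cleanup(context);                         // once, after the last record
    }
}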