
MapR (HP) Hadoop Developer Certification Questions and Answers (Dumps and Practice Questions)



Question : Is the number of reducers defined by the user?

1. True
2. False

Correct Answer : 1

Explanation: The number of reduce tasks is chosen by the user, typically via Job.setNumReduceTasks(); if it is not set, the default is a single reducer.
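For context, a minimal driver-side sketch (the class name and the value 4 are arbitrary examples, not from the original question): the reducer count is whatever the user passes to Job.setNumReduceTasks().

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ReducerCountExample {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "reducer count demo");
        // The user chooses the reducer count; the framework does not compute it.
        job.setNumReduceTasks(4); // 4 is an arbitrary example value
        // ... remaining job setup (mapper, reducer, input/output paths) omitted
    }
}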







Question : A developer has submitted a YARN job by calling the submitApplication() method on the Resource Manager.
Please select the correct order of the steps that follow:

1. The container is managed by the Node Manager after job submission
2. The Resource Manager triggers its sub-component, the Scheduler, which allocates containers for MapReduce job execution
3. The Resource Manager starts the Application Master in the container provided by the Scheduler

1. 2,3,1
2. 1,2,3
3. 1,3,2

Correct Answer : 1


Explanation: Job start-up:
The call to Job.waitForCompletion() in the main driver class is where all the execution starts. The driver is the only piece of code that runs on the local machine, and this call starts the communication with the Resource Manager:
The client retrieves a new job ID (application ID) from the Resource Manager.
The client node copies the job resources specified via the -files, -archives, and -libjars command-line arguments, as well as the job JAR file, onto HDFS.
Finally, the job is submitted by calling the submitApplication() method on the Resource Manager.
The Resource Manager then triggers its sub-component, the Scheduler, which allocates containers for the MapReduce job's execution. Next, the Resource Manager starts the Application Master in the container provided by the Scheduler; from then on, that container is managed by the Node Manager.
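To make the flow concrete, here is a minimal driver sketch using Hadoop's built-in TokenCounterMapper and IntSumReducer (the class name WordCountDriver is a hypothetical example); the single waitForCompletion() call drives every step described above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(TokenCounterMapper.class); // Hadoop's stock token-counting mapper
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // This one call performs the start-up sequence above: it obtains an
        // application ID, copies the job resources to HDFS, and finally calls
        // submitApplication() on the Resource Manager.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}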






Question : Which statement is correct for the code snippet below?

public class TokenCounterMapper
        extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
        }
        context.write(word, one);
    }
}


1. All key-value pairs will be written to context
2. Some key-value pairs will be written to context
3. No key-value pairs will be written to context

Correct Answer : 2

Explanation: context.write() sits outside the while loop, so the loop consumes every token of the input record and only the last token is left in word when the single write happens. Each call to map() therefore emits exactly one key-value pair (the last token of the line), not one pair per token.
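For contrast, this is how the token-counting mapper is conventionally written, with context.write() inside the loop so that every token is emitted; a sketch mirroring Hadoop's own TokenCounterMapper:

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class FixedTokenCounterMapper extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one); // inside the loop: one pair per token
        }
    }
}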




Related Questions


Question : Which describes how a client reads a file from HDFS?

1. The client queries the NameNode for the block location(s). The NameNode returns the block location(s) to the client. The client reads the data directly off the DataNode(s).

2. The client queries all DataNodes in parallel. The DataNode that contains the requested data responds directly to the client. The client reads the data directly off the DataNode.

3. The client contacts the NameNode for the block location(s). The NameNode then queries the DataNodes for block locations. The DataNodes respond to the NameNode,
and the NameNode redirects the client to the DataNode that holds the requested data block(s). The client then reads the data directly off the DataNode.
4. The client contacts the NameNode for the block location(s). The NameNode contacts the DataNode that holds the requested data block. Data is transferred from the
DataNode to the NameNode, and then from the NameNode to the client.
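For background, a minimal HDFS client read in Java (the path /user/demo/foo.txt is a hypothetical example): FileSystem.open() obtains block locations from the NameNode, and the returned stream then reads the data directly from the DataNodes.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadExample {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // open() consults the NameNode for block locations; reads then go
        // straight to the DataNodes that hold the blocks.
        try (FSDataInputStream in = fs.open(new Path("/user/demo/foo.txt"))) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) > 0) {
                System.out.write(buf, 0, n);
            }
        }
    }
}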


Question : Can you use MapReduce to perform a relational join on two large tables sharing a key? Assume that the two tables are formatted as comma-separated files in HDFS.

1. Yes.

2. Yes, but only if one of the tables fits into memory

3. Yes, so long as both tables fit into memory.

4. No, MapReduce cannot perform relational operations.

5. No, but it can be done with either Pig or Hive.
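For reference, a standard reduce-side join can handle two arbitrarily large comma-separated tables, because only the rows sharing a single key are held in memory at once. A compact sketch; the file-name prefix "orders" and the assumption that the join key is the first CSV field are illustrative, not from the original question:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class ReduceSideJoin {

    // Tags each record with its source table so the reducer can tell them apart.
    public static class JoinMapper extends Mapper<Object, Text, Text, Text> {
        private final Text joinKey = new Text();
        private final Text tagged = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] parts = value.toString().split(",", 2);
            String file = ((FileSplit) context.getInputSplit()).getPath().getName();
            String tag = file.startsWith("orders") ? "A" : "B"; // hypothetical file names
            joinKey.set(parts[0]);
            tagged.set(tag + "\t" + (parts.length > 1 ? parts[1] : ""));
            context.write(joinKey, tagged);
        }
    }

    // All records sharing a key arrive in one reduce call; emit their cross product.
    public static class JoinReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            List<String> left = new ArrayList<>();
            List<String> right = new ArrayList<>();
            for (Text v : values) {
                String[] p = v.toString().split("\t", 2);
                if ("A".equals(p[0])) left.add(p[1]); else right.add(p[1]);
            }
            for (String l : left)
                for (String r : right)
                    context.write(key, new Text(l + "," + r));
        }
    }
}

Because only one key group is buffered at a time, neither table has to fit in memory; that constraint applies only to map-side (replicated) joins.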


Question : A NameNode in Hadoop 2 manages ______________.

1. Two namespaces: an active namespace and a backup namespace

2. A single namespace

3. An arbitrary number of namespaces

4. No namespaces


Question : In Hadoop 2, which one of the following statements is true about a standby NameNode? The standby NameNode:

1. Communicates directly with the active NameNode to maintain the state of the active NameNode.

2. Receives the same block reports as the active NameNode.

3. Runs on the same machine and shares the memory of the active NameNode.

4. Processes all client requests and block reports from the appropriate DataNodes.


Question : Identify the MapReduce v2 (MRv2 / YARN) daemon responsible for launching application containers and monitoring application resource usage.

1. ResourceManager

2. NodeManager

3. ApplicationMaster

4. ApplicationMasterService

5. TaskTracker



Question : A client application creates an HDFS file named foo.txt with a replication factor of 3. Identify which best describes the file access rules in HDFS
if the file has a single block that is stored on DataNodes A, B, and C?

1. The file will be marked as corrupted if data node B fails during the creation of the file.

2. Each data node locks the local file to prohibit concurrent readers and writers of the file.

3. Each data node stores a copy of the file in the local file system with the same name as the HDFS file.

4. The file can be accessed if at least one of the data nodes storing the file is available.
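On the availability point: HDFS keeps a file readable as long as at least one replica of each of its blocks survives. A small sketch that inspects where the replicas of a file live (the path is a hypothetical example):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicaInspector {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FileStatus status = fs.getFileStatus(new Path("/user/demo/foo.txt"));
        System.out.println("Replication factor: " + status.getReplication());
        // Each block lists the DataNodes currently holding a replica; the client
        // can read the block as long as at least one of those hosts is reachable.
        for (BlockLocation loc : fs.getFileBlockLocations(status, 0, status.getLen())) {
            System.out.println("Block hosts: " + String.join(", ", loc.getHosts()));
        }
    }
}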