
Cloudera Hadoop Administrator Certification Questions and Answers (Dumps and Practice Questions)



Question : What is Pig?
1. Pig is a subset of the Hadoop API for data processing
2. Pig is a part of the Apache Hadoop project that provides a scripting-language interface for data processing
3. ...
4. None of the above


Correct Answer : 2

Pig is a project that was developed by Yahoo for people with strong skills in scripting languages.
From a script written in its scripting language, it automatically generates the corresponding MapReduce jobs.
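
To make this concrete, here is a minimal sketch (the script name, input path, and output path are hypothetical, not part of the question): a few lines of Pig Latin that Pig compiles into MapReduce jobs behind the scenes.

cat > wordcount.pig <<'EOF'
-- classic word count: load lines, split them into words, count each word
lines  = LOAD '/user/hadoopexam/input.txt' AS (line:chararray);
words  = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
grpd   = GROUP words BY word;
counts = FOREACH grpd GENERATE group, COUNT(words);
STORE counts INTO '/user/hadoopexam/wordcount_out';
EOF
pig wordcount.pig    # Pig plans the script into MapReduce job(s) and submits them to the cluster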






Question : What is distributed cache?
1. The distributed cache is a special component on the NameNode that caches frequently used data for faster client response. It is used during the reduce step.
2. The distributed cache is a special component on the DataNode that caches frequently used data for faster client response. It is used during the map step.
3. ...
4. The distributed cache is a component that allows developers to deploy jars for Map-Reduce processing.



Correct Answer : 4


Explanation: The distributed cache is Hadoop's answer to the problem of deploying third-party libraries. The distributed cache allows a job's libraries to be deployed to all the DataNodes before the job's tasks run.
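
For example (a minimal sketch; the jar name, driver class, and paths are hypothetical), the generic -libjars option ships a third-party library through the distributed cache so every task node can use it:

hadoop jar myjob.jar com.example.MyJob \
    -libjars /local/path/third-party.jar \
    /user/hadoopexam/input /user/hadoopexam/output
# Note: -libjars is handled by Hadoop's GenericOptionsParser, so the driver
# must be written against the Tool/ToolRunner interface for it to take effect.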





Question : You already have a cluster on Hadoop MapReduce MRv1, and now you have to upgrade it to MRv2, but your management
is not agreeing to install Pig. You have to convince your management to install Apache Pig on the Hadoop cluster.
Which statement correctly describes the relationship between MapReduce and Apache Pig?
1. Apache Pig relies on MapReduce and allows special-purpose processing not provided by MapReduce.
2. Apache Pig comes with no capabilities additional to MapReduce; Pig programs are executed as MapReduce jobs via the Pig interpreter.
3. ...
4. Apache Pig comes with the additional capability of allowing you to control the flow of multiple MapReduce jobs.



Correct Answer : 2

Explanation: Apache Pig allows you to write complex MapReduce transformations using a simple scripting language. Pig Latin (the language) defines a set of transformations on a data set such as aggregate, join, and sort. Pig translates the Pig Latin script into MapReduce so that it can be executed within Hadoop. Pig Latin is sometimes extended using UDFs (User Defined Functions), which the user can write in Java or a scripting language and then call directly from Pig Latin.
What Pig Does : Pig was designed for performing a long series of data operations, making it ideal for three categories of Big Data jobs:
Extract transform load (ETL) data pipelines,
Research on raw data, and
Iterative data processing.
Whatever the use case, Pig is:
Extensible : Pig users can create custom functions to meet their particular processing requirements.
Easy to program : complex tasks involving interrelated data transformations can be simplified and encoded as data flow sequences. Pig programs accomplish huge tasks, yet they are easy to write and maintain.
Self-optimizing : the system automatically optimizes execution of Pig jobs, so the user can focus on semantics.
How Pig Works : Pig runs on Hadoop and makes use of MapReduce and the Hadoop Distributed File System (HDFS). The language for the platform is called Pig Latin, which abstracts the Java MapReduce idiom into a form similar to SQL. Pig Latin is a data flow language, whereas SQL is a declarative language: SQL is great for asking a question of your data, while Pig Latin allows you to write a data flow that describes how your data will be transformed. Since Pig Latin scripts can be graphs (instead of requiring a single output), it is possible to build complex data flows involving multiple inputs, transforms, and outputs. Users can extend Pig Latin by writing their own functions in Java, Python, Ruby, or other scripting languages. The user can run Pig in two modes:
Local Mode : with access to a single machine, all files are installed and run using the local host and file system.
MapReduce Mode : the default mode, which requires access to a Hadoop cluster.
The user can run Pig in either mode using the "pig" command or the "java" command. Pig is a framework that translates programs written in Pig Latin into jobs that are executed by the MapReduce framework. Pig does not provide any functionality that isn't provided by MapReduce, but it makes some types of data operations significantly easier to perform.
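
As a quick reference (a minimal sketch; the script name is hypothetical), the mode is chosen with the -x flag of the pig command:

pig -x local wordcount.pig        # local mode: single machine, local file system
pig -x mapreduce wordcount.pig    # MapReduce mode (the default): runs as jobs on the Hadoop cluster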



Related Questions


Question : Select the correct command/commands that can be used to dump the container logs

1. yarn logs -applicationId ApplicationId
2. yarn logs -appOwner AppOwner
3. ...
4. yarn logs -nodeAddress NodeAddress

1. 1,2,3
2. 2,3,4
3. ...
4. 1,2,4
5. All 1,2,3,4
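
For reference (a minimal sketch; the application ID and owner are made up), yarn logs is keyed on the application ID, and the remaining flags narrow the output:

yarn logs -applicationId application_1420330000000_0001                        # all container logs for the application
yarn logs -applicationId application_1420330000000_0001 -appOwner hadoopexam   # same, for a job run by another user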


Question : You have a cluster of nodes in the Geneva Datacenter, and you find that a specific node in your cluster appears to be running
slower than the other nodes, all of which have the same hardware configuration. You suspect that the RAM in the system could be failing.
Which command may be used to view the memory seen by the system?

1. free
2. df
3. ...
4. jps
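
For example (a minimal sketch), free reports the physical memory the operating system actually sees, which can be compared against the RAM that is supposed to be installed in the node:

free -m    # RAM and swap totals in megabytes; a shortfall hints at a failed module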


Question : You have a cluster of nodes in the Geneva Datacenter, and you find that a specific node in your cluster appears to be running
slower than the other nodes, all of which have the same hardware configuration. You suspect that the RAM in the system could be failing.
Which commands may be used to view the memory seen by the system?
1. free
2. du
3. top
4. dmidecode
5. ramusage
6. ps -aef | grep java
7. memoryusage

1. 1,4,5
2. 1,2,4
3. ...
4. 1,4,6
5. 1,4,7
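
As an illustration (a minimal sketch showing the plausible memory-inspection commands from the list above, not the official answer key):

free -m                   # RAM and swap as seen by the OS
dmidecode --type memory   # per-DIMM details from the BIOS/SMBIOS tables (requires root); a failed module often disappears here
top                       # live per-process memory usage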



Question : You have a Hadoop cluster in the Geneva Datacenter with a NameNode on host hadoopexam,
a Secondary NameNode on host hadoopexam2, and 1000 slave nodes. Thrice a day you have to create a report
of when the last checkpoint happened. Select the best way to find this.

1. Connect to the web UI of the Primary NameNode (http://hadoopexam1:50090/) and look at the "Last Checkpoint" information.
2. Execute hdfs dfsadmin -lastreport on the command line
3. ...
4. With the command line option hdfs dfsadmin -Checkpointinformation
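
For context (a minimal sketch; the host comes from the question, and the exact page layout varies by Hadoop version), the Secondary NameNode's web UI listens on port 50090 by default and reports the last checkpoint time, which a script could scrape for the thrice-daily report:

curl -s http://hadoopexam2:50090/ | grep -i checkpoint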



Question : Which of the following information can be obtained by connecting to the web UI of the Secondary NameNode?

1. NameNode Address
2. Last Checkpoint Time
3. ...
4. CheckPoint Size
5. CheckPoint Edit Dirs


1. 1,2,3,4
2. 2,3,4,5
3. ...
4. All 1,2,3,4,5
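
If the Secondary NameNode's address is not known up front, it can be read from the cluster configuration before opening the UI (a minimal sketch):

hdfs getconf -secondaryNameNodes    # prints the Secondary NameNode host(s) from the configuration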


Question : In HadoopExam Inc's Geneva Datacenter you have a Hadoop cluster with a NameNode named HadoopExam and a Secondary NameNode named HadoopExam2;
all other remaining nodes are DataNodes. A specific node in your cluster appears to be running slower than other nodes with the same hardware configuration.
You suspect that the system is swapping memory to disk due to over-allocation of resources. Which commands may be used to view the memory and swap usage on the system?

1. ps -aef | grep java
2. vmstat
3. ...
4. du
5. memswap
6. memoryusage
7. top


1. 1,5,6
2. 1,4,7
3. ...
4. 5,6,7
5. 1,6,7
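
As a quick check (a minimal sketch), swapping shows up directly in the output of the standard tools:

vmstat 5 3    # the si/so columns report pages swapped in/out per interval
free -m       # the Swap row shows total, used, and free swap
top           # the header line shows swap usage; heavy swappers stand out by RES/%MEM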