Question : What is Pig?
1. Pig is a subset of the Hadoop API for data processing
2. Pig is a part of the Apache Hadoop project that provides a scripting-language interface for data processing
3.
4. None of the above
Explanation: Pig is a project that was developed by Yahoo for people with strong skills in scripting languages rather than Java. From a script written in its scripting language, Pig automatically generates the corresponding MapReduce jobs.
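As a minimal sketch of what that means in practice (the input file, aliases, and the word-count data flow below are illustrative, not from the source), a handful of Pig Latin statements submitted through Pig's Java PigServer API compile into MapReduce jobs with no hand-written mapper or reducer:

import org.apache.pig.ExecType;
import org.apache.pig.PigServer;

public class PigWordCount {
    public static void main(String[] args) throws Exception {
        // Local mode for illustration; ExecType.MAPREDUCE targets a cluster.
        PigServer pig = new PigServer(ExecType.LOCAL);

        // Each registerQuery() line is Pig Latin; Pig assembles the whole
        // data flow and compiles it into MapReduce jobs behind the scenes.
        pig.registerQuery("lines = LOAD 'input.txt' USING TextLoader() AS (line:chararray);");
        pig.registerQuery("words = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;");
        pig.registerQuery("grouped = GROUP words BY word;");
        pig.registerQuery("counts = FOREACH grouped GENERATE group, COUNT(words);");

        // STORE is what actually triggers job execution.
        pig.store("counts", "wordcount-output");
    }
}

The STORE step is what triggers execution; a hand-written MapReduce equivalent of this four-statement data flow would require separate mapper, reducer, and driver classes.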
Question : What is the distributed cache?
1. The distributed cache is a special component on the NameNode that caches frequently used data for faster client response. It is used during the reduce step
2. The distributed cache is a special component on the DataNode that caches frequently used data for faster client response. It is used during the map step
3.
4. The distributed cache is a component that allows developers to deploy JARs for MapReduce processing
Explanation: The distributed cache is Hadoop's answer to the problem of deploying third-party libraries. It allows files and libraries needed by a job to be deployed to all DataNodes, so that map and reduce tasks can use them locally.
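As a rough sketch with the MRv2 Job API (the paths and the job name below are made up for illustration), a job attaches files and JARs to the distributed cache before submission, and Hadoop copies them to every node that runs the job's tasks:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;

public class CacheSetup {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "cache-example");

        // Ship a read-only lookup file to every task node; the "#lookup"
        // fragment creates a symlink of that name in the task's working dir.
        job.addCacheFile(new URI("/shared/ref/lookup.dat#lookup"));

        // Deploy a third-party library: the jar is distributed and added
        // to the classpath of every map and reduce task.
        job.addFileToClassPath(new Path("/shared/libs/third-party.jar"));

        // ... configure mapper, reducer, and I/O paths, then submit ...
    }
}

Inside a task, cached files can then be read by their symlink name or enumerated via context.getCacheFiles().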
Question : You already have a cluster running Hadoop MapReduce MRv1 and now have to upgrade it to MRv2, but your management is unwilling to install Pig, and you have to convince them to install Apache Pig on the Hadoop cluster. Which statement correctly describes the relationship between MapReduce and Apache Pig?
1. Apache Pig relies on MapReduce and allows special-purpose processing not provided by MapReduce.
2. Apache Pig adds no additional capabilities to MapReduce; Pig programs are executed as MapReduce jobs via the Pig interpreter.
3.
4. Apache Pig comes with the additional capability of allowing you to control the flow of multiple MapReduce jobs.
Explanation: Apache Pig allows you to write complex MapReduce transformations using a simple scripting language. Pig Latin (the language) defines a set of transformations on a data set such as aggregate, join and sort. Pig translates the Pig Latin script into MapReduce so that it can be executed within Hadoop. Pig Latin is sometimes extended using UDFs (User Defined Functions), which the user can write in Java or a scripting language and then call directly from the Pig Latin.

What Pig does: Pig was designed for performing a long series of data operations, making it ideal for three categories of Big Data jobs: extract-transform-load (ETL) data pipelines, research on raw data, and iterative data processing. Whatever the use case, Pig is:
1. Extensible. Pig users can create custom functions to meet their particular processing requirements.
2. Easy to program. Complex tasks involving interrelated data transformations can be simplified and encoded as data flow sequences. Pig programs accomplish huge tasks, but they are easy to write and maintain.
3. Self-optimizing. The system automatically optimizes execution of Pig jobs, so the user can focus on semantics.

How Pig works: Pig runs on Hadoop and makes use of MapReduce and the Hadoop Distributed File System (HDFS). The language for the platform is called Pig Latin, which abstracts the Java MapReduce idiom into a form similar to SQL. Pig Latin is a data-flow language whereas SQL is a declarative language: SQL is great for asking a question of your data, while Pig Latin allows you to write a data flow that describes how your data will be transformed. Since Pig Latin scripts can be graphs (instead of requiring a single output), it is possible to build complex data flows involving multiple inputs, transforms, and outputs. Users can extend Pig Latin by writing their own functions in Java, Python, Ruby, or other scripting languages.

The user can run Pig in two modes:
1. Local mode. With access to a single machine, all files are installed and run using the local host and file system.
2. MapReduce mode. This is the default mode, which requires access to a Hadoop cluster.
The user can run Pig in either mode using the "pig" command or the "java" command. Pig is a framework that translates programs written in Pig Latin into jobs that are executed by the MapReduce framework. Pig does not provide any functionality that isn't provided by MapReduce, but it makes some types of data operations significantly easier to perform.
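To make the UDF point concrete, here is a minimal sketch following the classic upper-casing example pattern from the Pig documentation; the package name myudfs and the jar name are illustrative assumptions, not from the source:

package myudfs;

import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

// A Pig UDF: extend EvalFunc and implement exec(). Once packaged into a
// jar and registered in the script (REGISTER myudfs.jar;), it can be
// called from Pig Latin, e.g.: B = FOREACH A GENERATE myudfs.UPPER(name);
public class UPPER extends EvalFunc<String> {
    @Override
    public String exec(Tuple input) throws IOException {
        // Guard against empty or null input tuples.
        if (input == null || input.size() == 0 || input.get(0) == null) {
            return null;
        }
        return ((String) input.get(0)).toUpperCase();
    }
}

This is the mechanism by which Pig Latin, despite being a small language, can be extended with arbitrary per-record processing logic.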
Question : How can you determine when the last checkpoint was performed on your cluster?
1. Connect to the web UI of the Secondary NameNode (http://hadoopexam1:50090/) and look at the "Last Checkpoint" information
2. Execute hdfs dfsadmin -lastreport on the command line
3.
4. With the command line option hdfs dfsadmin -Checkpointinformation