Question: What is HBase?
1. HBase is a separate set of Java APIs for the Hadoop cluster
2. HBase is a part of the Apache Hadoop project that provides an interface for scanning large amounts of data using the Hadoop infrastructure
3. …
4. HBase is a part of the Apache Hadoop project that provides a SQL-like interface for data processing.
Explanation: HBase is one of the Hadoop framework projects that allows real-time data scans across big data volumes. It is very often used to serve data directly from a cluster.
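As an illustration of such a real-time scan, here is a minimal sketch using a recent HBase Java client API; the table name "users" is a hypothetical example, not anything from the question above:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.*;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseScanExample {
        public static void main(String[] args) throws Exception {
            // Picks up hbase-site.xml from the classpath
            Configuration conf = HBaseConfiguration.create();
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("users"))) { // hypothetical table
                Scan scan = new Scan(); // full-table scan; restrict the row range in practice
                try (ResultScanner scanner = table.getScanner(scan)) {
                    for (Result result : scanner) {
                        System.out.println(Bytes.toString(result.getRow()));
                    }
                }
            }
        }
    }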
Question: What is the role of the namenode?
1. The namenode splits big files into smaller blocks and sends them to different datanodes
2. The namenode is responsible for assigning names to each slave node so that they can be identified by the clients
3. …
4. Both 2 and 3 are valid answers
Explanation: The namenode is the "brain" of the Hadoop cluster and is responsible for managing the distribution of blocks on the system based on the replication policy. The namenode also supplies the specific block addresses for the data based on client requests.
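A sketch of how a client asks the namenode for those block addresses through the public HDFS API; the path /user/training/data.txt is hypothetical:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockLocationsExample {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path file = new Path("/user/training/data.txt"); // hypothetical path
            FileStatus status = fs.getFileStatus(file);
            // The namenode answers this call with the datanodes holding each block
            BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation block : blocks) {
                System.out.println(String.join(",", block.getHosts()));
            }
        }
    }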
Question: What happens if a datanode loses its network connection for a few minutes?
1. The namenode will detect that a datanode is not responsive and will start replication of the data from the remaining replicas. When the datanode comes back online, the administrator will need to manually delete the extra replicas
2. All data will be lost on that node. The administrator has to ensure proper data distribution between nodes
3. …
4. The namenode will detect that a datanode is not responsive and will start replication of the data from the remaining replicas. When the datanode comes back online, the extra replicas will be deleted
Ans: 4
Exp: The replication factor is actively maintained by the namenode. The namenode monitors the status of all datanodes and keeps track of which blocks are located on each node. The moment a datanode becomes unavailable, it triggers replication of the data from the existing replicas. However, if the datanode comes back up, the over-replicated data will be deleted. Note: the data might be deleted from the original datanode.
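To see the replication factor the namenode maintains for a given file, it can be read off the file's status; a minimal sketch with a hypothetical path:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplicationCheck {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path file = new Path("/user/training/data.txt"); // hypothetical path
            short replication = fs.getFileStatus(file).getReplication();
            System.out.println("Target replication factor: " + replication);
        }
    }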
Question: What happens if one of the datanodes has a much slower CPU? How will it affect the performance of the cluster?
1. The task execution will be as fast as the slowest worker. However, if speculative execution is enabled, the slowest worker will not have such a big impact
2. The slowest worker will significantly impact job execution time. It will slow everything down
3. …
4. It depends on the level of priority assigned to the task. All high priority tasks are executed in parallel twice. A slower datanode would therefore be bypassed. If a task is not high priority, however, performance will be affected.
Ans: 1
Exp: Hadoop was specifically designed to work with commodity hardware. Speculative execution helps to offset slow workers. Multiple instances of the same task will be created; the job tracker will take the first result into consideration, and the second instance of the task will be killed.
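Speculative execution is controlled per job; a hedged sketch of toggling it through the MapReduce job configuration (property names as in Hadoop 2.x; older releases used mapred.map.tasks.speculative.execution):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class SpeculativeConfig {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Enable speculative execution for both map and reduce tasks
            conf.setBoolean("mapreduce.map.speculative", true);
            conf.setBoolean("mapreduce.reduce.speculative", true);
            Job job = Job.getInstance(conf, "speculative-demo");
            // ... set mapper, reducer, and input/output paths as usual ...
        }
    }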
Question: If you have a file of 128M size and the replication factor is set to 3, how many blocks can you find on the cluster that correspond to that file (assuming the default Apache and Cloudera configuration)?
1. 3
2. 6
3. …
4. 12
Ans: 2
Exp: Based on the configuration settings, the file will be divided into multiple blocks according to the default block size of 64M: 128M / 64M = 2. Each block will be replicated according to the replication factor setting (default 3): 2 * 3 = 6.
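The same arithmetic, written out as a small helper; the constants mirror the defaults assumed above:

    public class BlockCount {
        public static void main(String[] args) {
            long fileSize = 128L * 1024 * 1024;   // the 128M file from the question
            long blockSize = 64L * 1024 * 1024;   // default block size assumed above
            int replication = 3;                  // default replication factor
            // Round up: a partial final block still occupies one block
            long blocks = (fileSize + blockSize - 1) / blockSize;
            System.out.println(blocks * replication); // prints 6
        }
    }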
Question: What is the replication factor?
1. The replication factor controls how many times the namenode replicates its metadata
2. The replication factor creates multiple copies of the same file to be served to clients
3. The replication factor controls how many copies of each data block are stored in the cluster
4. None of these answers are correct.
Ans: 3
Exp: Data is replicated in the Hadoop cluster based on the replication factor. A high replication factor guarantees data availability in the event of failure.
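The replication factor can also be changed per file after the fact; a minimal sketch with a hypothetical path, equivalent to the hadoop fs -setrep command:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SetReplication {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path file = new Path("/user/training/data.txt"); // hypothetical path
            // Ask the namenode to maintain 5 copies of every block of this file
            fs.setReplication(file, (short) 5);
        }
    }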
Question: How does the Hadoop cluster tolerate datanode failures?
1. Failures are anticipated. When they occur, the jobs are re-executed.
2. Datanodes talk to each other and figure out what needs to be re-replicated if one of the nodes goes down
3. …
4. Since Hadoop is designed to run on commodity hardware, datanode failures are expected. The namenode keeps track of all available datanodes and actively maintains the replication factor on all data.
Ans: 4
Exp: The namenode actively tracks the status of all datanodes and acts immediately if a datanode becomes non-responsive. The namenode is the central "brain" of HDFS and starts replication of the data the moment a disconnect is detected.
Question: Which of the following tools defines a SQL-like language?
Question: The Hadoop framework provides a mechanism for coping with machine issues such as faulty configuration or impending hardware failure. MapReduce detects that one or a number of machines are performing poorly and starts more copies of a map or reduce task. All the tasks run simultaneously, and the task that finishes first is used. Which term describes this behaviour?
Question: When using the hadoop fs -put command to write a 500 MB file with a 64 MB block size, can another user read the already-written blocks while the file is half written?
1. It will throw an exception
2. The file blocks which are already written would be accessible
3. …
4. Until the whole file is copied, nothing can be accessed.
Ans: 4
Exp: While writing a file of 528MB size using the following command:
    hadoop fs -put tragedies_big4 /user/training/shakespeare/
we tried to read the file using the following command, with the output below:
    [hadoopexam@localhost ~]$ hadoop fs -cat /user/training/shakespeare/tragedies_big4
    cat: "/user/training/shakespeare/tragedies_big4": No such file or directory
    [hadoopexam@localhost ~]$ hadoop fs -cat /user/training/shakespeare/tragedies_big4
    cat: "/user/training/shakespeare/tragedies_big4": No such file or directory
    [training@localhost ~]$ hadoop fs -cat /user/training/shakespeare/tragedies_big4
    cat: "/user/training/shakespeare/tragedies_big4": No such file or directory
Only once the put command finishes are we able to "cat" this file.
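The same experiment can be reproduced programmatically; a hedged sketch, assuming a copy of the file started from another process, where the open() call fails until the copy completes, matching the transcript above:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReadDuringPut {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path target = new Path("/user/training/shakespeare/tragedies_big4");
            // Started from another process: the equivalent of hadoop fs -put
            // fs.copyFromLocalFile(new Path("tragedies_big4"), target);
            try {
                fs.open(target).close(); // fails while the put is still in progress
                System.out.println("File is readable");
            } catch (java.io.FileNotFoundException e) {
                System.out.println("File not visible yet: " + e.getMessage());
            }
        }
    }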