
Cloudera HBase Certification Questions and Answers (Dumps and Practice Questions)



Question :
While storing values in HBase, why does cell size matter?


 :
1. There are practical limits to the size of values
2. In general, cell size should not consistently be above 10 MB
3. Both 1 and 2 are correct
4. Both 1 and 2 are wrong

Correct Answer : 3 (both statements are correct)

Explanation: When storing data in HBase, cell size matters because:
- There are practical limits to the size of values
- In general, cell size should not consistently be above 10 MB





Question :

How should you handle large cells in HBase?

 :
1. Increase the block size
2. Increase the maximum region size for the table
3. Keep the index size reasonable
4. All of 1, 2 and 3 are correct
5. Only 1 and 3 are correct



Correct Answer : 4 (all of 1, 2 and 3 are correct)


Explanation: Cell size
- There are practical limits to the size of values
- In general, cell size should not consistently be above 10 MB
- For large cells:
  - Increase the block size
  - Increase the maximum region size for the table
  - Keep the index size reasonable
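The two tuning knobs above can be applied per table from the HBase shell. The sketch below is illustrative only: the table name 't1', column family 'f1', and the specific byte values are assumptions, not values from the question.

```
# 't1' and 'f1' are hypothetical names; sizes are examples.
# Raise the block size for the column family holding large cells
# (the default is 64 KB; here 256 KB, in bytes):
alter 't1', {NAME => 'f1', BLOCKSIZE => '262144'}

# Raise the maximum region size for the table so large cells do not
# trigger constant region splits (here 100 GB, in bytes):
alter 't1', MAX_FILESIZE => '107374182400'
```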




Question :

In the case of counters, synchronization is done on the RegionServer and not on the client side.

 :
1. True
2. False




Correct Answer : 1 (True)

Explanation: Counter increments are applied atomically by the RegionServer that hosts the row, so clients do not need to coordinate with each other when incrementing the same counter.
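Counters can be exercised from the HBase shell without any client-side locking, since `incr` is applied atomically by the RegionServer. A small sketch (the table, row, and column names are hypothetical):

```
# 'counters', 'row1' and 'cf:hits' are illustrative names.
incr 'counters', 'row1', 'cf:hits', 1      # atomic increment on the RegionServer
get_counter 'counters', 'row1', 'cf:hits'  # read back the current counter value
```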




Related Questions


Question :

Which is the correct syntax to delete all the rows of a table?

 :
1. deleteall 't1'
2. truncate 'tablename'
3. deleteall 't1', 'r1'
4. None of the above



Question :

Select the correct syntax for deleting a column in a row
 :
1. deleteall 't1', 'r1'
2. deleteall 't1'
3. delete 't1', 'r1', 'fam1:c1'
4. none of the above



Question : Select the syntax to delete an entire row of the table


 :
1. deleteall 't1', 'r1'
2. deleteall 't1'
3. deleteall
4. None of the above
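The three delete-related shell commands in the questions above differ in scope. A summary sketch, using the placeholder table, row, and column names from the questions:

```
truncate 't1'                  # removes ALL rows (disables, drops and recreates the table)
deleteall 't1', 'r1'           # deletes an entire row
delete 't1', 'r1', 'fam1:c1'   # deletes a single column (cell) within a row
```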




Question : You are working with an advertising company called Acmeshell. You have collected more than . million logos and images
of your clients, which are stored in HBase, and you have a web application through which you retrieve these images.
In which format will your data be returned from an HBase scan?



 :
1. CLOB
2. BLOB
3. Sequence Files
4. Array of bytes
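HBase is schema-agnostic about values: everything is stored and returned as an array of bytes, and the shell prints those bytes in escaped form. A hypothetical scan against an image table might look like:

```
# 'images' and 'cf:logo' are hypothetical names.
scan 'images', {COLUMNS => ['cf:logo'], LIMIT => 1}
# The value comes back as raw bytes; for a PNG logo the shell would
# print something like value=\x89PNG\x0D\x0A\x1A\x0A...
```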





Question : You have a software professional website called QuickTechie.com where users create new articles every day. You extract all these articles
from a MySQL database into a file called 12012014Articles.txt. In the Hadoop shell you run the following command. Select the statement which applies.

hadoop fs -put 12012014Articles.txt /12012014


 :
1. Copies the article file 12012014Articles.txt from the default HDFS directory into the HDFS directory /hdfs/hive/warehouse/12012014
2. Copies the article file 12012014Articles.txt from the default HDFS directory into the HDFS directory /hdfs
3. Copies the article file 12012014Articles.txt from the default HDFS directory into the HDFS directory /hdfs/12012014
4. Copies the article file 12012014Articles.txt from the local directory into the HDFS directory /12012014
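`hadoop fs -put` reads from the local filesystem and writes into HDFS; it behaves like `-copyFromLocal`. A quick way to confirm where the file landed:

```
hadoop fs -put 12012014Articles.txt /12012014   # local file -> HDFS path /12012014
hadoop fs -ls /12012014                         # verify the file now exists in HDFS
```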




Question : You have downloaded HBase from the Apache distribution and did not change any HDFS settings. You have created a setup in which,
as soon as a new article is committed by a software engineer on the website QuickTechie.com, it is pushed to HBase. While saving the article
in an HBase table, you observed that HBase first writes to the Write-Ahead Log (WAL). What could be the reason?

 :
1. It will cache the data so it can give high read throughput

2. It will cache the data so it can give high write throughput

3. If the RegionServer fails before persisting the data to its final location, the data will still be available, avoiding any data loss.
4. It helps the even distribution of data across all the data centers.