Cloudera HBase Certification Questions and Answers (Dumps and Practice Questions)



Question : Which statement is correct for versioning?

1. The versions are sorted by their timestamp in ascending order
2. The versions are sorted by their timestamp in descending order
3. By default, HBase maintains an unlimited number of versions
4. Only 1 and 3 are correct
5. Only 2 and 3 are correct


Correct Answer : 2

Table cells are the intersection of a row and a column
A {row, column, version} tuple specifies a cell in HBase
Cell content is an uninterpreted array of bytes
Cells are versioned; a cell can hold many versions of its value (the maximum is configurable per column family)
The version is specified using a long integer
By default, the version is the current time in milliseconds since the Unix epoch (00:00:00 UTC, 01/01/1970)
Versions are stored in decreasing (newest-first) order

By default, HBase keeps three versions of a cell
The versions are sorted by their timestamp (in descending order)
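The newest-first ordering can be pictured with a plain Java TreeMap keyed by timestamp in reverse order. This is a simplified stand-in for HBase's internal cell ordering, not the real implementation; the class and method names here are illustrative only:

```java
import java.util.Comparator;
import java.util.TreeMap;

public class VersionOrder {
    // One cell's versions: timestamp -> value, sorted newest-first,
    // mirroring HBase's descending-timestamp ordering.
    private final TreeMap<Long, byte[]> versions = new TreeMap<>(Comparator.reverseOrder());

    public void put(long timestamp, byte[] value) {
        versions.put(timestamp, value);
    }

    // A default read returns the version with the largest timestamp.
    public byte[] latest() {
        return versions.firstEntry().getValue();
    }

    public static void main(String[] args) {
        VersionOrder cell = new VersionOrder();
        cell.put(1000L, "v1".getBytes());
        cell.put(3000L, "v3".getBytes());
        cell.put(2000L, "v2".getBytes());
        System.out.println(new String(cell.latest())); // newest value, "v3"
    }
}
```

Because the map sorts descending, the first entry is always the newest version, which is exactly why a default read returns the latest value.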







Question : When a get or scan operation is performed on HBase, by default it will return

1. Cell with largest value for version is returned
2. Cell with smallest value for version is returned
3. It returns a random value from all the versions
4. None of the above is correct


Correct Answer : 1

Use Get or Scan to retrieve data
By default, the cell with the largest value for version (the newest) is returned
Use Get.setMaxVersions() to return more than one version
Use Get.setTimeRange() to return versions other than the latest
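What setMaxVersions(n) and setTimeRange(min, max) ask for can be sketched with the same newest-first map idea. This is a conceptual model only, not HBase's read path; the class and method names are made up for illustration:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class ReadVersions {
    // One cell's versions: timestamp -> value, newest first.
    static TreeMap<Long, String> cell = new TreeMap<>(Comparator.reverseOrder());

    // Conceptually what Get.setMaxVersions(n) requests: the n newest versions.
    static List<String> latest(int n) {
        List<String> out = new ArrayList<>();
        for (String v : cell.values()) {
            if (out.size() == n) break;
            out.add(v);
        }
        return out;
    }

    // Conceptually what Get.setTimeRange(min, max) requests:
    // versions with min <= timestamp < max, still newest first.
    static List<String> inRange(long min, long max) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<Long, String> e : cell.entrySet())
            if (e.getKey() >= min && e.getKey() < max) out.add(e.getValue());
        return out;
    }

    public static void main(String[] args) {
        cell.put(1L, "a"); cell.put(2L, "b"); cell.put(3L, "c");
        System.out.println(latest(1));       // the default read: newest only
        System.out.println(latest(2));       // as with setMaxVersions(2)
        System.out.println(inRange(1L, 3L)); // as with setTimeRange(1, 3)
    }
}
```

With no options set, a read behaves like latest(1): only the newest version comes back.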






Question : To retrieve the last three versions of a row, which operation needs to be performed?


1. get.setMaxVersions(3);
2. get.setMaxVersions("three");
3. get.setAllVersions(3);
4. htable.get(get);
5. By default it always returns the latest three versions only




Correct Answer : 1

To return the last three versions, the following code needs to be used:

Get get = new Get(Bytes.toBytes("row1"));
get.setMaxVersions(3);
Result r = htable.get(get);

And to return only the current version of the row:
Get get = new Get(Bytes.toBytes("row1"));
Result r = htable.get(get);
byte[] b = r.getValue(Bytes.toBytes("cf"), Bytes.toBytes("attr"));




Related Questions


Question : You are working with an advertising company called Acmeshell. You have collected more than . million logos and images
of your clients, which are stored in HBase, and you have a web application where you retrieve these images.
In which format will your data be returned from an HBase scan?


1. CLOB
2. BLOB
3. Sequence Files
4. Array of bytes
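As the versioning notes above say, cell content is an uninterpreted array of bytes: a scan hands back raw byte[] values and the client decides how to decode them. A minimal stand-alone sketch of that round trip, using plain Java helpers as a stand-in for HBase's real org.apache.hadoop.hbase.util.Bytes utility:

```java
import java.nio.charset.StandardCharsets;

public class RawBytes {
    // HBase does not type cell values: everything is stored and
    // returned as a byte[]. These helpers mimic the conversions a
    // client performs on either side of a scan.
    static byte[] toBytes(String s) {
        return s.getBytes(StandardCharsets.UTF_8);
    }

    static String toString(byte[] b) {
        return new String(b, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] stored = toBytes("logo.png contents");
        // A scan would return this value as-is; interpreting the
        // bytes (text, image data, ...) is the client's job.
        System.out.println(toString(stored));
    }
}
```

The same applies to binary payloads like images: they go in and come out as uninterpreted byte arrays.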





Question : You have a software professional website called QuickTechie.com where every day users create new articles. You extract all these articles
from the MySQL database to a file called 12012014Articles.txt. In the Hadoop shell you fire the following command. Select the correct statement which applies.

hadoop fs -put 12012014Articles.txt /12012014

1. Copies the article txt file 12012014Articles.txt from the default HDFS directory into the HDFS directory /hdfs/hive/warehouse/12012014
2. Copies the article txt file 12012014Articles.txt from the default HDFS directory into the HDFS directory /hdfs
3. Copies the article txt file 12012014Articles.txt from the default HDFS directory into the HDFS directory /hdfs/12012014
4. Copies the article txt file 12012014Articles.txt from the local directory into the HDFS directory 12012014




Question : You have downloaded HBase from the Apache distribution and did not change any HDFS settings. You have created a setup in which,
as soon as a new article is committed by a software engineer on the website QuickTechie.com, it is pushed to HBase. While saving the article
in an HBase table, you observed that it first writes to the Write Ahead Log (WAL). What could be the reason?

1. It will cache the data so it can give high read throughput

2. It will cache the data so it can give high write throughput

3. If the RegionServer fails before persisting the data to its final location, the data will still be available, avoiding any data loss.
4. It helps the even distribution of data across all the data centers.
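The durability role a WAL plays (option 3's scenario) can be sketched with a toy write path: every mutation is appended to a durable log before it touches the in-memory store, so data lost from memory in a crash can be recovered by replaying the log. This is a deliberately simplified illustration, not HBase's actual RegionServer implementation:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WalSketch {
    // Durable log (stands in for the WAL file on HDFS).
    final List<String[]> wal = new ArrayList<>();
    // In-memory store (stands in for the MemStore); lost on a crash.
    Map<String, String> memstore = new HashMap<>();

    void put(String key, String value) {
        wal.add(new String[]{key, value}); // 1. append to the log first
        memstore.put(key, value);          // 2. then apply in memory
    }

    void crashAndRecover() {
        memstore = new HashMap<>();        // crash: in-memory data is gone
        for (String[] entry : wal)         // recovery: replay the log
            memstore.put(entry[0], entry[1]);
    }

    public static void main(String[] args) {
        WalSketch rs = new WalSketch();
        rs.put("article:1", "draft");
        rs.crashAndRecover();
        System.out.println(rs.memstore.get("article:1")); // survives the crash
    }
}
```

Because the log entry exists before the in-memory write, no acknowledged write is lost even if the process dies before flushing to its final location.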




Question : There is a feature on the QuickTechie.com website that any software professional can create an article, as well as update and delete it.
You decided to use HBase rather than HDFS to store these articles. What would be the reason you preferred HBase over HDFS?


1. Fault tolerance
2. Batch processing
3. Random writes
4. Even Distribution of Data.




Question : All the software professionals who subscribe to QuickTechie.com created their profiles, and as an administrator you also store the joining date of
each profile. The full history of all the users and their profiles is stored in HBase for further analysis. Now one of the data scientists wants to fire an ad-hoc
query to fetch the joining date of a bad profile that is publishing adult content on the website. In order to fetch the data from a cell (Joining Date),
you need to supply HBase with which of the following?
1. A row key, column family and column qualifier
2. A row key, column qualifier and version

3. A column key
4. A column key and column qualifier
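As the earlier notes state, a {row, column, version} tuple specifies a cell, where the column is a family plus a qualifier. Reading a single cell can be pictured as nested map lookups; this toy model (with hypothetical row, family, and qualifier names) is not HBase's storage layout, just the addressing scheme:

```java
import java.util.HashMap;
import java.util.Map;

public class CellLookup {
    // table: row key -> ("family:qualifier" -> value). A toy model of
    // addressing a cell by {row key, column family, column qualifier};
    // omitting the version yields the newest cell by default.
    static Map<String, Map<String, String>> table = new HashMap<>();

    static String get(String row, String family, String qualifier) {
        return table.getOrDefault(row, Map.of()).get(family + ":" + qualifier);
    }

    public static void main(String[] args) {
        table.put("user42", new HashMap<>());
        table.get("user42").put("PROFILE:joining_date", "2014-01-12");
        // All three coordinates are needed to reach the cell.
        System.out.println(get("user42", "PROFILE", "joining_date"));
    }
}
```

Supplying only a column key, or omitting the row key, cannot identify a unique cell, which is why all three coordinates are required.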





Question : There is a feature on the QuickTechie.com website that any software professional can create an article, as well as update and delete it.
You decided to use HBase rather than HDFS to store these articles. You need to create an ARTICLES table in HBase. The table will consist of one column family called
PROFILE_ARTICLES and two column qualifiers, Author and Comment. Select the correct command which will create this table:
1. create 'ARTICLES', {NAME => 'Author', NAME =>'Comment'}

2. create 'ARTICLES', 'PROFILE_ARTICLES:Author', 'PROFILE_ARTICLES:Comment'

3. create 'ARTICLES', 'PROFILE_ARTICLES' {NAME => 'Author', NAME => 'Comment'}

4. create 'ARTICLES', 'PROFILE_ARTICLES'