Question : In which case should HBase not be used?
1. When you only append data to your dataset and read the whole dataset 2. When you need random read 3. When you need random write 4. When the access pattern is well known
Correct Answer : 1
HBase is a good fit when:
You need random write, random read, or both (but not neither)
You need to do many thousands of operations per second on multiple TB of data
Your access patterns are well-known and simple
HBase is not a good fit when:
You only append to your dataset, and tend to read the whole thing
You primarily do ad-hoc analytics (ill-defined access patterns)
Your data easily fits on one beefy node
Question :
A technical architect is designing a solution for storing the huge volume of data generated every day by stock markets; its only purpose is to store this data and read it once for daily analysis. He concludes that he should use HBase. Did he make the right decision?
1. Yes 2. No
Correct Answer : 2
Explanation: HBase is not a good fit for just appending data and reading it back as a whole. In this case he should consider using the HDFS file system alone to store the data and reading it back from the files as a whole.
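As a rough sketch of that simpler alternative (not code from the original answer), the Java snippet below writes a day's records to a plain HDFS file and later reads the whole file back sequentially. The path, file name and record format are hypothetical, and it assumes an HDFS cluster reachable through the default Hadoop Configuration.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class DailyTicksOnHdfs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();            // picks up core-site.xml / hdfs-site.xml
        try (FileSystem fs = FileSystem.get(conf)) {
            Path day = new Path("/stock/ticks/2024-01-02.csv"); // hypothetical path and file name

            // Write the day's records once (append-only pattern, sample values for illustration).
            try (FSDataOutputStream out = fs.create(day, true)) {
                out.write("AAPL,2024-01-02T09:30:00,185.64\n".getBytes(StandardCharsets.UTF_8));
                out.write("MSFT,2024-01-02T09:30:00,370.87\n".getBytes(StandardCharsets.UTF_8));
            }

            // Read the whole file back for the daily analysis (full sequential scan, no random access).
            try (FSDataInputStream in = fs.open(day);
                 BufferedReader reader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }
}
```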
Question :
Please select the correct statement about HBase.
1. In HBase every row has a row key 2. All columns in HBase belong to a particular column family 3. A table can have one or more column families 4. Table cells are versioned 5. All of the above
Correct Answer : 5
The HBase data model has the following features:
Tables are made of rows and columns
Every row has a row key (analogous to a primary key)
Rows are stored sorted by row key for fast lookups
All columns in HBase belong to a particular column family
A table may have one or more column families
Common to have a small number of column families
Column families should rarely change
A column family can have any number of columns
Table cells are versioned, uninterpreted arrays of bytes
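A minimal sketch of how these pieces map onto the HBase Java client API. The table name 'stocks', the column family 'quote' and the qualifier 'close' are hypothetical (the table is assumed to already exist with that family), and the client is assumed to reach a running cluster through the default HBase configuration.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DataModelExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("stocks"))) {   // hypothetical table

            // Every row is addressed by a row key; rows are stored sorted by this key.
            byte[] rowKey = Bytes.toBytes("AAPL#2024-01-02");

            // Each cell lives at (row key, column family, column qualifier) and is just bytes.
            Put put = new Put(rowKey);
            put.addColumn(Bytes.toBytes("quote"), Bytes.toBytes("close"), Bytes.toBytes("185.64"));
            table.put(put);

            // Random read of a single row by its key.
            Result result = table.get(new Get(rowKey));
            byte[] close = result.getValue(Bytes.toBytes("quote"), Bytes.toBytes("close"));
            System.out.println("close = " + Bytes.toString(close));
        }
    }
}
```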
Question :
Which of the following is correct about cell versions in HBase?
1. The versions are sorted by their timestamp in ascending order 2. The versions are sorted by their timestamp in descending order 3. By default HBase maintains an unlimited number of versions 4. Only 1 and 3 are correct 5. Only 2 and 3 are correct
Question :
If no version is specified when reading a cell, which value is returned?
1. The cell with the largest value for version is returned 2. The cell with the smallest value for version is returned 3. It returns a random value from all the versions 4. None of the above is correct
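For illustration, the hedged sketch below reuses the hypothetical 'stocks' table and 'quote' family from above to show how versioned reads behave in the Java client: a plain Get with no version settings returns only the newest cell, and when several versions are requested they come back sorted by timestamp, newest first. How many versions are retained is configured per column family (the VERSIONS attribute); it is not unlimited by default.

```java
import java.util.List;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class VersionReadExample {
    // Assumes 'table' is an open handle to the hypothetical 'stocks' table.
    static void readVersions(Table table) throws Exception {
        byte[] rowKey = Bytes.toBytes("AAPL#2024-01-02");
        byte[] family = Bytes.toBytes("quote");
        byte[] qualifier = Bytes.toBytes("close");

        // Default read: no version specified, so only the newest cell comes back.
        Result latest = table.get(new Get(rowKey));
        System.out.println("latest = " + Bytes.toString(latest.getValue(family, qualifier)));

        // Ask for up to 3 versions; cells arrive sorted by timestamp, newest first.
        Get get = new Get(rowKey);
        get.readVersions(3);   // setMaxVersions(3) on older client versions
        Result result = table.get(get);
        List<Cell> cells = result.getColumnCells(family, qualifier);
        for (Cell cell : cells) {
            System.out.println(cell.getTimestamp() + " -> " + Bytes.toString(CellUtil.cloneValue(cell)));
        }
    }
}
```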
Question :
Which of the following is correct about deleting data in HBase?
1. Deleted data is not immediately removed 2. Delete creates a tombstone marker 3. Tombstones mask the deleted values 4. Data is removed at major compaction 5. All of the above
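To illustrate the delete behaviour these options describe, here is a short sketch against the same hypothetical 'stocks' table: a Delete writes a tombstone marker that masks the existing cells, and the masked data is physically removed only when a major compaction rewrites the store files.

```java
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DeleteExample {
    // Assumes 'table' is an open handle to the hypothetical 'stocks' table.
    static void deleteRow(Table table) throws Exception {
        byte[] rowKey = Bytes.toBytes("AAPL#2024-01-02");

        // This writes a tombstone marker; the old cells stay in the store files for now.
        table.delete(new Delete(rowKey));

        // Reads immediately see the row as gone, because the tombstone masks the old values.
        boolean visible = table.exists(new Get(rowKey));
        System.out.println("row still visible? " + visible);   // prints false

        // The masked cells (and eventually the tombstone itself) are physically dropped only
        // when a major compaction rewrites the store files, e.g. from the HBase shell:
        //   major_compact 'stocks'
    }
}
```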