Question-: You have enabled the leveled compaction strategy for one of your tables, and it has been in place for the last few months. You have found that one of its SSTables has grown very large, and because of this the table has not been compacted for some time. What would you do in this situation? A. You would change the compaction strategy to SizeTieredCompactionStrategy. B. You would manually break this SSTable into two parts. C. You would use the sstablesplit command on that table. D. You would stop the Cassandra cluster and start it again (a full cluster restart).
Answer: C
Explanation: With compaction strategies such as STCS or LCS, it is possible for an SSTable to grow so large that it is not considered for many subsequent compaction cycles. To avoid this, you need to split the SSTable, which can be done with the sstablesplit command. The command takes as an argument the maximum size of each resulting SSTable and splits the file accordingly.
However, keep in mind that the Cassandra node must be stopped before the sstablesplit command is executed.
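For illustration only, here is a minimal sketch of driving sstablesplit from Python. The SSTable path and the 50 MB target size are hypothetical, and the node owning the file is assumed to be stopped first.

```python
# Minimal sketch: split an oversized SSTable with the sstablesplit tool.
# Assumption: the Cassandra node that owns this SSTable has already been stopped.
import subprocess

# Hypothetical path to the oversized data file of the affected table.
sstable_path = "/var/lib/cassandra/data/ks1/big_table/nb-1-big-Data.db"

# -s / --size sets the maximum size (in MB) of each SSTable produced by the split.
subprocess.run(["sstablesplit", "-s", "50", sstable_path], check=True)
```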
Question-: You have a Cassandra cluster set up across three geographies, with several nodes in each datacentre. These datacentres are physically quite far from each other, so whenever a write request is sent, it is good enough to write in the same datacentre where the coordinator node resides. Which of the following consistency levels would help here? A. LOCAL_ONE B. LOCAL_QUORUM C. LOCAL_ALL D. EACH E. TWO
Answer: A, B
Explanation: Cassandra defines various consistency levels, which determine how many acknowledgements are required, and from which nodes, for a request to be considered successful. The question clearly states that it is good enough if the write is acknowledged within the same datacentre where the coordinator node resides. Two consistency levels satisfy this requirement.
LOCAL_ONE: The write request succeeds if it is acknowledged by at least one replica node in the local datacentre.
LOCAL_QUORUM: A stronger form of consistency; the write must be written to the commit log and memtable on a quorum of replica nodes in the same datacentre as the coordinator node. Both levels avoid the latency of inter-datacentre communication.
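As an illustration (the contact point, keyspace, and table below are hypothetical), this is how the two levels can be set per statement with the DataStax Python driver:

```python
# Minimal sketch using the DataStax Python driver (cassandra-driver).
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["10.0.0.1"])          # hypothetical contact point in the local datacenter
session = cluster.connect("my_keyspace")  # hypothetical keyspace

# LOCAL_QUORUM: a quorum of replicas in the coordinator's datacenter must acknowledge the write.
insert = SimpleStatement(
    "INSERT INTO users (id, name) VALUES (%s, %s)",  # hypothetical table
    consistency_level=ConsistencyLevel.LOCAL_QUORUM,
)
session.execute(insert, (42, "alice"))

# LOCAL_ONE: a single replica in the local datacenter is enough.
read = SimpleStatement(
    "SELECT name FROM users WHERE id = %s",
    consistency_level=ConsistencyLevel.LOCAL_ONE,
)
row = session.execute(read, (42,)).one()
```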
Question-: You have a Cassandra cluster set up with three data centres, each with a replication factor of 3. Now you start writing data. Based on the defined consistency level, map the following.
A. LOCAL_ONE B. LOCAL_QUORUM C. EACH_QUORUM D. QUORUM
1. Write must succeed on one node in the same data centre where the coordinator node exists. 2. Write must succeed on two nodes in the same data centre where the coordinator node exists. 3. Write must succeed on a majority of replica nodes in each data centre. 4. At least five copies of the data should be written, regardless of data centre.
Answer: A-1,B-2, C-3, D-4
Explanation: You need to understand how consistency levels work with multiple data centres. The keyword quorum means a majority of the replicas, as determined by the replication factor. In this question there are 9 replicas in total, distributed across the data centres of a single Cassandra cluster, so the majority is 5. If you define QUORUM as the consistency level, then 5 copies of the data must be written across the cluster; it does not matter which nodes in which data centres acknowledge them.
Now let's look at the other consistency levels.
EACH_QUORUM: Each data centre must reach a quorum, so two copies of the data must be written in each data centre, which results in 6 copies across the cluster.
LOCAL_QUORUM: A quorum is required only locally, so two copies must be written in the same data centre where the coordinator node exists.
LOCAL_ONE: Only one copy needs to be written in the same data centre where the coordinator node exists.
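To make the arithmetic above concrete, here is a small sketch (assuming three data centres with a replication factor of 3 each, as in the question) that computes the acknowledgement counts for each level:

```python
# Worked sketch of the quorum arithmetic: 3 datacenters x replication factor 3 = 9 replicas.
replication_factor_per_dc = 3
datacenters = 3
total_replicas = replication_factor_per_dc * datacenters  # 9

def quorum(replicas: int) -> int:
    """Majority of the given number of replicas."""
    return replicas // 2 + 1

print("QUORUM       ->", quorum(total_replicas), "acks, from any datacenter")        # 5
print("EACH_QUORUM  ->", quorum(replication_factor_per_dc), "per DC,",
      quorum(replication_factor_per_dc) * datacenters, "in total")                    # 2 per DC, 6 total
print("LOCAL_QUORUM ->", quorum(replication_factor_per_dc), "in the coordinator's DC")  # 2
print("LOCAL_ONE    ->", 1, "in the coordinator's DC")
```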