
DataStax Cassandra Administrator Certification Questions and Answers (Practice Questions and Dumps)



Question-: You enabled the leveled compaction strategy on one of your tables several months ago. You have now found that one of its SSTables has grown to more than GB in size, and because of this the table has not been compacted for the last few days. What would you do in this situation?
A. You would change the compaction strategy to SizeTieredCompactionStrategy.
B. You would manually break this SSTable into two parts.
C. You would use the sstablesplit command on that table.
D. You would stop the Cassandra cluster and start it again (i.e., a cluster restart).

Answer: C

Explanation: Sometimes, when you use a compaction strategy such as STCS or LCS, an SSTable can become very large, and it may then not be considered for the next many compaction cycles. To avoid this kind of issue, you need to split the SSTable, which can be done with the sstablesplit command. The command takes as an input argument the target size of each split SSTable and splits the file accordingly.

However, keep in mind that the Cassandra node must be stopped before the “sstablesplit” command is executed.
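Below is a minimal sketch of what this could look like on a node; the service commands, data-directory path, keyspace/table names, and the 50 MB target size are assumptions for illustration only.

# sstablesplit must not be run against a live node, so stop Cassandra first.
sudo service cassandra stop

# Split the oversized SSTable into chunks of roughly 50 MB each
# (-s/--size takes the target size in megabytes; the path and size here are hypothetical).
sstablesplit -s 50 /var/lib/cassandra/data/my_keyspace/my_table-*/*-big-Data.db

# Bring the node back up once the split has finished.
sudo service cassandra start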



Question-: You have a Cassandra cluster set up across three geographies, with nodes in each datacenter. These datacenters are physically quite far from each other, so whenever a write request is sent, it is good enough to write it only in the datacenter where the coordinator node resides. Which of the following consistency levels would help with this?
A. LOCAL_ONE
B. LOCAL_QUORUM
C. LOCAL_ALL
D. EACH
E. TWO

Answer: A,B
Explanation: There are various consistency levels defined for a Cassandra cluster, which decide how many acknowledgements are required, and from where, before a write request is considered successful. The question clearly states that it is good enough if the write is acknowledged by replica nodes in the same datacenter where the coordinator node resides. There are two consistency levels that can express this requirement.

LOCAL_ONE: In this case the write request is successful if it is acknowledged by at least one replica node in the local datacenter.
LOCAL_QUORUM: This is a kind of strong consistency; the write must be written to the commit log and memtable on a quorum of replica nodes in the same datacenter as the coordinator node. This helps avoid the latency of inter-datacenter communication.
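As a minimal sketch (the he_keyspace.users table and its columns are hypothetical, and cqlsh is assumed to be available on the node), the consistency level can be set per cqlsh session before the write is issued:

# Only a quorum of replicas in the coordinator's local datacenter must acknowledge:
cqlsh -e "CONSISTENCY LOCAL_QUORUM;
          INSERT INTO he_keyspace.users (user_id, name) VALUES (uuid(), 'alice');"

# A single replica in the coordinator's local datacenter is enough:
cqlsh -e "CONSISTENCY LOCAL_ONE;
          INSERT INTO he_keyspace.users (user_id, name) VALUES (uuid(), 'bob');"

In application code the same effect is achieved by setting the consistency level on the driver's session or statement instead of in cqlsh.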



Question-: You have a Cassandra cluster set up with three data centers, each having a replication factor of 3. Now you start writing data. Based on the defined consistency level, map the following.

A. LOCAL_ONE
B. LOCAL_QUORUM
C. EACH_QUORUM
D. QUORUM

1. The write must succeed on one node in the same data center where the coordinator node exists.
2. The write must succeed on two nodes in the same data center where the coordinator node exists.
3. The write must succeed on two nodes in each data center.
4. At least 5 copies of the data must be written, on any nodes across the cluster.


Answer: A-1, B-2, C-3, D-4

Explanation: You need to understand how consistency levels work when they are used with multiple data centers. The keyword quorum means that a majority of the replicas, based on the replication factor, must acknowledge the write. In this question the total number of replicas across the data centers in the single Cassandra cluster is 9 (replication factor 3 in each of the three data centers), so the majority is 5. If you define QUORUM as the consistency level, then 5 copies of the data must be written across the cluster; it does not matter which nodes in which data centers acknowledge the write.

Now let's talk about the other consistency levels.

EACH_QUORUM: In this case each data center must reach a majority, so two copies of the data must be written in each data center, which results in 6 copies across all nodes in the cluster.

LOCAL_QUORUM: In this case the majority is local to the data center, which means two copies must be written in the same data center where the coordinator node exists.

LOCAL_ONE: In this case only one copy must be written in the same data center where the coordinator node exists.
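To make the arithmetic concrete, here is a sketch of a keyspace definition matching this scenario (the keyspace and datacenter names are assumptions), followed by the quorum calculations as comments:

# Hypothetical keyspace spanning three datacenters with replication factor 3 each.
cqlsh -e "CREATE KEYSPACE IF NOT EXISTS he_keyspace
          WITH replication = {'class': 'NetworkTopologyStrategy',
                              'dc1': 3, 'dc2': 3, 'dc3': 3};"

# Quorum arithmetic for this keyspace:
#   total replicas per partition = 3 + 3 + 3 = 9
#   QUORUM       = floor(9 / 2) + 1 = 5 acknowledgements, from any datacenter(s)
#   LOCAL_QUORUM = floor(3 / 2) + 1 = 2 acknowledgements in the coordinator's datacenter
#   EACH_QUORUM  = 2 acknowledgements in every datacenter = 6 in total
#   LOCAL_ONE    = 1 acknowledgement in the coordinator's datacenter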

Related Questions


Question-: In your organization the network team is changing the topology, and therefore the IP address of one of the nodes in the Cassandra cluster has to change. This setup is not using PropertyFileSnitch as the “endpoint_snitch”. Which steps do you need to follow?
A. Run the “nodetool drain” command
B. Update the cassandra.yaml file with the new IP details
C. Stop and then start Cassandra on this node
D. Initiate a rolling restart of the Cassandra cluster


Question-: You have just set up the cluster and have not made it operational yet. Now you find that you need to switch the snitch for a particular node. Do you always have to set up the Cassandra cluster from scratch in order to switch the snitch?
A. True
B. False


Question-: You have a Cassandra cluster currently set up in a single datacenter. You need to move 4 nodes from this cluster into a new datacenter, which would be part of the same cluster, so that there are nodes in each datacenter. Which of the following would you do, at the minimum?
A. Alter the keyspace replication settings to reflect the two datacenters.
B. Once the data is replicated to the new datacenter, remove from the original datacenter the nodes that have been moved to the new datacenter.
C. Decommission 4 nodes from the cluster
D. Use the “nodetool assassinate” command on the 4 nodes which you would be moving


Question-: You are trying to insert data into a table that is part of a keyspace using the NetworkTopologyStrategy, but you are not able to do so. What could be the possible reason?
A. Cassandra version on each node is not correct
B. Table has reached the defined size
C. You have not created a partition key on the table
D. You have not defined the datacenter names in the snitch properties file


Question-: You are using “NetworkTopologyStrategy” for the replication of your keyspace. However, you want to change the replication for this keyspace as below:
ALTER KEYSPACE HE_KEYSPACE WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'dc_he_1' : 3, 'dc_he_2' : 2};
Which of the following nodetool commands should you run after this change?
A. “nodetool decommission”
B. “nodetool rebalance”
C. “nodetool repair --full HE_KEYSPACE”
D. “nodetool tpstats HE_KEYSPACE”
E. “nodetool netstats HE_KEYSPACE”


Question-: You have created a Cassandra cluster named “HadoopExamCluster”, and you want to rename this cluster to “QuickTechieCluster”. How can you do that?
A. You would be updating the cassandra.yaml file
B. You would be updating the jvm.options file
C. You would be updating the snitch properties file
D. You cannot do that