
DataStax Cassandra Developer Certification Questions and Answers (Dumps and Practice Questions)



Question : You have been given a relationship diagram with a one-to-one relationship.

E.g. there are two entities, Person and Passport.

Each person can have only one passport, and each passport can belong to only one person.

Person : Person_ID (unique in the database)
Passport : Passport Number (unique in the database)

So, which would be the correct key in this case?
1. Key attributes of either participating entity types
2. Key attributes of both participating entity types
3. Key attribute of the entity type on the N-side
4. Key attribute of the relationship type

Correct Answer : 1
Explanation : In a one-to-one relationship, the key of either participating entity type uniquely identifies a relationship instance, so you can use either side's key.
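The one-to-one case can be sketched with a small, hypothetical Python example (the Person/Passport names follow the question; the sample ids are made up): because the mapping is 1:1, either key looks up exactly one relationship instance.

```python
# One-to-one: Person <-> Passport. Sample ids are hypothetical.
person_to_passport = {
    "P001": "X9001",   # Person_ID -> Passport Number
    "P002": "X9002",
}
# Because the relationship is 1:1, the inverse mapping is also a
# valid unique key for the same relationship instances.
passport_to_person = {v: k for k, v in person_to_passport.items()}

# Either key identifies exactly one relationship instance.
assert person_to_passport["P001"] == "X9001"
assert passport_to_person["X9001"] == "P001"
# 1:1 means no duplicates on either side:
assert len(passport_to_person) == len(person_to_passport)
```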




Question : You are uploading your videos to the HadoopExam training website HadoopExam.com, and you have the following relationship: one user id can upload many videos, but each video has a unique id. In this case, how do you find the unique key for a 1-to-N relationship?
1. Key attributes of either participating entity types

2. Key attributes of both participating entity types

3. Key attribute of the entity type on the N-side

4. Key attribute of the relationship type


Correct Answer : 3 (key attribute of the entity type on the N-side)
Explanation : Since this is a 1-to-many relationship, a user can upload many training videos, so the user id appears many times. The only way to uniquely identify a relationship instance is the video id on the many (N) side.
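The 1-to-N case can be illustrated the same way (user and video ids below are made up): the 1-side key repeats across instances, while the N-side key does not.

```python
# One-to-many: one user uploads many videos; sample data is hypothetical.
videos = [
    {"video_id": "v1", "user_id": "u1"},
    {"video_id": "v2", "user_id": "u1"},   # same user appears again
    {"video_id": "v3", "user_id": "u2"},
]
user_ids = [v["user_id"] for v in videos]
video_ids = [v["video_id"] for v in videos]

# The 1-side key (user_id) repeats, so it cannot identify a
# relationship instance on its own...
assert len(set(user_ids)) < len(videos)
# ...but the N-side key (video_id) is unique per instance.
assert len(set(video_ids)) == len(videos)
```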




Question : You have been writing a technical book, which will be published on the HadoopExam website. HadoopExam has provided several other authors to help you complete the book on time, so we have a many-to-many relationship: one book can be written by multiple authors, and an author can write multiple books. How would you find a unique relationship instance in this case?

You have a unique author_id and a unique book_id.

Author (Author_id) -----(m to m)----- Book (BookId)
1. Key attributes of either participating entity types


2. Key attributes of both participating entity types


3. Key attribute of the entity type on the N-side


4. Key attribute of the relationship type


Correct Answer : 2
Explanation : In this scenario, to identify a relationship instance uniquely we need to use the keys from both sides of the relationship, i.e. the composite (Author_id, BookId).
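For the many-to-many case, a toy example (author and book ids are made up) shows why only the composite key works: each side's key repeats, but the pair does not.

```python
# Many-to-many: authorship pairs (author_id, book_id); data is hypothetical.
wrote = [("a1", "b1"), ("a1", "b2"), ("a2", "b1")]

author_ids = [a for a, _ in wrote]
book_ids = [b for _, b in wrote]
# Neither side's key is unique on its own ("a1" and "b1" repeat)...
assert len(set(author_ids)) < len(wrote)
assert len(set(book_ids)) < len(wrote)
# ...but the composite (author_id, book_id) identifies each instance.
assert len(set(wrote)) == len(wrote)
```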


Related Questions


Question : When you need to add a new node to a Cassandra cluster, do you have to bring down the cluster?
1. True
2. False


Question : When your driver uses TokenAwarePolicy


1. The coordinator cannot be avoided; each read and write goes through a coordinator node

2. The coordinator is not used; each read and write goes directly to the specific node that the data belongs to

3. It eliminates a hop for your data

4. 1 and 3

5. 2 and 3
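What "eliminates a hop" means can be sketched with a toy model (the node names, replica placement, and routing choice below are made up; this is not the driver API): without token awareness the client may contact a coordinator that holds no replica of the key, adding an internal coordinator-to-replica hop; with token awareness the driver sends the request straight to a replica, which then acts as its own coordinator.

```python
# Toy cluster: which nodes hold a replica of each partition key (hypothetical).
replicas = {"k1": {"node1", "node2"}}

def request_path(key, token_aware):
    """Return the server-side nodes a request touches."""
    if token_aware:
        # A token-aware driver picks a replica as the coordinator.
        coordinator = sorted(replicas[key])[0]
    else:
        # A non-token-aware policy may pick any node, e.g. round-robin
        # landed on a node that holds no replica of this key.
        coordinator = "node3"
    path = [coordinator]
    if coordinator not in replicas[key]:
        # Extra internal hop: coordinator forwards to a replica.
        path.append(sorted(replicas[key])[0])
    return path

assert request_path("k1", token_aware=True) == ["node1"]            # one node
assert request_path("k1", token_aware=False) == ["node3", "node1"]  # extra hop
```

Note that a coordinator still exists either way; with token-aware routing it is simply the replica itself.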


Question : Which of the following policies rely on the coordinator?


1. RoundRobin

2. DCAwareRoundRobinPolicy

3. TokenAwarePolicy

4. 1 and 2
5. 1 and 3



Question : In your Cassandra cluster, you see that most of the data is going to one particular node only while the other nodes sit idle. This is known as...



1. vnode

2. Bad Partitioning

3. Hot Spot

4. Dump spot



Question : Your Cassandra cluster has evenly distributed tokens, but you still see a hot-spot problem. How can it be solved?
1. By adding a new node to the cluster

2. By re-arranging the tokens in the cluster

3. With the help of virtual nodes

4. It cannot be solved



Question : What is true with regard to vnodes (virtual nodes)?

A. Virtual nodes allow us to create individual smaller token ranges per node, breaking these ranges up across the cluster
B. Before Cassandra 3.0, vnodes have 256 token ranges per node
C. After Cassandra 3.0, there are many more token ranges per node (more than 256)
D. After Cassandra 3.0, it is much less, and it is configurable by the user (less than 256)

1. A,B,C
2. B,C,D
3. A,C,D
4. A,B,D
5. A,B,C,D
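The idea in statement A can be sketched with a toy hash ring (the node names, token counts, seed, and md5 hashing are simplified stand-ins for Cassandra's partitioner, not its actual implementation): with vnodes, each node contributes many small token ranges instead of one large one, which spreads load across the cluster.

```python
import bisect
import hashlib
import random

RING = 2 ** 64

def build_ring(nodes, tokens_per_node, seed=7):
    """Assign random tokens to nodes; many tokens per node = vnodes."""
    rng = random.Random(seed)
    return sorted((rng.randrange(RING), n)
                  for n in nodes
                  for _ in range(tokens_per_node))

def owner(ring, key):
    """The node owning the first token at or after the key's hash."""
    t = int(hashlib.md5(key.encode()).hexdigest(), 16) % RING
    i = bisect.bisect_left(ring, (t,)) % len(ring)
    return ring[i][1]

nodes = ["n1", "n2", "n3"]
ring = build_ring(nodes, tokens_per_node=256)  # 256 vnodes per node
assert len(ring) == 3 * 256                    # many small ranges per cluster
owners = {owner(ring, f"key{i}") for i in range(1000)}
assert owners == set(nodes)                    # keys land on every node
```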