
DataStax Cassandra Developer Certification Questions and Answers (Dumps and Practice Questions)



Question : It is not possible to "tune" Cassandra into a completely CA system.
1. True
2. False

Correct Answer : 1 (True)
Explanation: There is a tradeoff between operation latency and consistency: higher consistency incurs higher
latency, lower consistency permits lower latency. You can control latency by tuning consistency.




Question : The consistency level determines the number of replicas that need to acknowledge the read or write operation
success to the client application.
1. True
2. False

Correct Answer : 1 (True)
Explanation: The consistency level determines the number of replicas that need to acknowledge the read or write
operation success to the client application. For read operations, the read consistency level specifies how many replicas must
respond to a read request before returning data to the client application. If a read operation reveals inconsistency among
replicas, Cassandra initiates a read repair to update the inconsistent data.




Question : Even at low consistency levels, Cassandra writes to all replicas of the partition key, including replicas in
other datacenters.
1. True
2. False

Correct Answer : 1 (True)
Explanation: For write operations, the write consistency level specifies how many replicas must respond to a write
request before the write is considered successful. Even at low consistency levels, Cassandra writes to all replicas of the
partition key, including replicas in other datacenters. The write consistency level just specifies when the coordinator can
report to the client application that the write operation is considered completed. Write operations will use hinted handoffs
to ensure the writes are completed when replicas are down or otherwise not responsive to the write request.

Typically, a client specifies a consistency level that is less than the replication factor specified by the keyspace. Another
common practice is to write at a consistency level of QUORUM and read at a consistency level of QUORUM. The choices made
depend on the client application's needs, and Cassandra provides maximum flexibility for application design.
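
The QUORUM-write/QUORUM-read pattern above can be illustrated in cqlsh. This is a minimal sketch, assuming a keyspace with replication factor 3 and a hypothetical demo table: with RF = 3, a quorum is (3 / 2) + 1 = 2 replicas, and since a read quorum (2) plus a write quorum (2) exceeds RF (3), every quorum read overlaps at least one replica that holds the latest quorum write.

```cql
-- Assumed example keyspace with replication factor 3 in one datacenter.
CREATE KEYSPACE IF NOT EXISTS hadoopexam
  WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};

-- Hypothetical table for the illustration.
CREATE TABLE IF NOT EXISTS hadoopexam.demo (id int PRIMARY KEY, val text);

-- cqlsh command: set the consistency level for subsequent requests.
-- With RF = 3, QUORUM requires 2 replicas to acknowledge.
CONSISTENCY QUORUM;

-- Both the write and the read now wait for 2 of the 3 replicas,
-- so the read is guaranteed to see the write (2 + 2 > 3).
INSERT INTO hadoopexam.demo (id, val) VALUES (1, 'x');
SELECT val FROM hadoopexam.demo WHERE id = 1;
```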



Related Questions


Question : Must Cassandra database tables always be in 3rd normal form?
1. True
2. False


Question : You have been given the below structure of data, with Cassandra data types:

course_id timeuuid
published_date timestamp
category set<text>
title text
trainer text


Following is the table structure:

CREATE TABLE HadoopExam (
course_id timeuuid,
published_date timestamp,
category set<text>,
title text,
trainer text,
PRIMARY KEY ( XXXX )
) WITH CLUSTERING ORDER BY ( YYYY );

Now replace XXXX and YYYY with the correct values to satisfy the below requirement:

Retrieve the courses a trainer has created (newest first).


1. XXXX=((trainer), published_date, course_id) and YYYY= published_date DESC ,course_id ASC

2. XXXX=((course_id), published_date, trainer) and YYYY= trainer DESC ,course_id ASC

3. (option not available)

4. XXXX=((course_id), published_date, trainer) and YYYY= published_date DESC ,course_id DESC

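
One schema that satisfies "retrieve the courses a trainer has created, newest first" (a sketch based on standard Cassandra modeling, not an official answer key) partitions by trainer and clusters by published_date in descending order, which corresponds to option 1 above:

```cql
CREATE TABLE HadoopExam (
    course_id timeuuid,
    published_date timestamp,
    category set<text>,
    title text,
    trainer text,
    PRIMARY KEY ( (trainer), published_date, course_id )
) WITH CLUSTERING ORDER BY ( published_date DESC, course_id ASC );
```

Partitioning by trainer lets a single-partition query return that trainer's courses, and the DESC clustering order returns the newest published_date first without needing an ORDER BY in the query.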



Question : You have been given below table definition

CREATE TABLE books_by_category (
category text,
published_date timestamp,
book_id timeuuid,
title text,
PRIMARY KEY ( XXXX)
) WITH CLUSTERING ORDER BY (YYYY );

Now fill in the values of XXXX and YYYY to satisfy the below query requirement:

Retrieve books within a particular category (newest first).
1. XXXX= ((category), book_id, published_date ) and YYYY=published_date DESC , book_id ASC

2. XXXX= ((category), published_date, book_id) and YYYY=published_date DESC , book_id ASC

3. (option not available)

4. XXXX= ((category), published_date ) and YYYY=published_date DESC
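
Whatever values are chosen for XXXX and YYYY, the query the requirement implies is a single-partition read. A sketch, assuming category is the partition key and published_date a descending clustering column (the literal 'cassandra' is just an example value):

```cql
-- Rows in a partition come back in clustering order, so with
-- CLUSTERING ORDER BY (published_date DESC, ...) the newest
-- books appear first with no ORDER BY needed in the query.
SELECT title, published_date
FROM books_by_category
WHERE category = 'cassandra';
```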



Question : You have been given a relationship diagram with a 1 to 1 relationship.

E.g. there are two entities, Person and Passport.

Each person can have only one passport, and each passport can belong to only one person.

Person : Person_ID (Unique in database)
Passport : Passport Number (Unique in database)

So, which would be the correct keys in this case?
1. Key attributes of either participating entity types
2. Key attributes of both participating entity types
3. (option not available)
4. Key attribute of the relationship type


Question : You are uploading your videos to the HadoopExam training website HadoopExam.com, and you have the following
relationship: one user id can upload many videos, but each video has a unique id. In this case, how do you find the unique
key for a 1 to n relationship?
1. Key attributes of either participating entity types

2. Key attributes of both participating entity types

3. (option not available)

4. Key attribute of the relationship type



Question : You have been writing a technical book, which will be published on the HadoopExam website. HadoopExam has
provided many other authors to help you complete your book on time. Hence, we have a many to many relationship here: one
book can be written by multiple authors, and an author can write multiple books. Now you need to find the unique relation;
how would you find it in this case?

You have a unique author_id and a unique book_id:

Author(Author_id) -----(m to m) ------ Book (BookId)
1. Key attributes of either participating entity types


2. Key attributes of both participating entity types


3. (option not available)


4. Key attribute of the relationship type
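
In Cassandra, a many to many relationship like Author-Book is typically denormalized into two lookup tables so that each direction can be read from a single partition; in both tables, the combination of the two entities' key attributes (author_id, book_id) uniquely identifies one relationship instance. A minimal sketch (table and column names are assumed for illustration):

```cql
-- Books written by a given author (one partition per author).
CREATE TABLE books_by_author (
    author_id uuid,
    book_id   uuid,
    PRIMARY KEY ( (author_id), book_id )
);

-- Authors of a given book (one partition per book).
CREATE TABLE authors_by_book (
    book_id   uuid,
    author_id uuid,
    PRIMARY KEY ( (book_id), author_id )
);
```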