cql3 - Spark + Cassandra: compound key with clustering order problems


I have a C* column family that stores event-like data. The column family was created in CQL3 like this:

create table event (
  hour text,
  stamp timeuuid,
  values map<text, text>,
  primary key (hour, stamp)
) with clustering order by (stamp desc);

The partitioner is Murmur3Partitioner. I tried to query this data from Spark through the Calliope library. In the results I hit two problems:

  1. When there are more than 1000 records per partition key (the 'hour' field), the response contains only the first 1000 records per key. I can increase the page size in the query to receive more data, but as far as I understand it should be the paginator's job to walk through the data and slice it.
  2. I receive each record more than once.

About the first problem, the Calliope author answered that the CQL3 driver must paginate the data, and recommended a DataStax article to me. I still can't find an answer on how to build the query with the right paging instructions for the driver.
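Until driver-side paging works, one manual workaround is to slice each partition on the clustering column yourself: fetch a page, remember the last `stamp` you saw, and restart the next query below it. A sketch (the hour value and the timeuuid literal are placeholders, not values from my data):

```cql
-- first page of one partition
select stamp, values from event
where hour = '2014010110'
limit 1000;

-- next page: with 'clustering order by (stamp desc)', records below
-- the last timeuuid seen are exactly the next page
select stamp, values from event
where hour = '2014010110'
  and stamp < 7f6b3b10-7a2e-11e3-981f-0800200c9a66  -- last stamp from previous page
limit 1000;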

About the second problem, I found an issue with the Hadoop connector in Cassandra < 1.2.11. I use C* 2.0.3 and rebuilt Spark against the required versions of the libraries. I use Calliope version 0.9.0-c2-ea.

Could you point me to documentation or code samples that explain the right way to solve these problems, or demonstrate workarounds? I suppose I'm using the C*-to-Spark connector improperly, but I can't find a solution.

Thanks in advance.

It turned out to be impossible to read the data correctly when using a non-default sort order for clustering keys. Everything works fine when the clustering keys use the default sort order (ASC).

The workaround is to modify the data model to use compound keys with the default clustering order.
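A minimal sketch of the reworked table: the same columns and compound primary key, but without the `with clustering order by (stamp desc)` clause, so `stamp` sorts ascending (the default):

```cql
create table event (
  hour text,
  stamp timeuuid,
  values map<text, text>,
  primary key (hour, stamp)
);
```

Queries that need newest-first results can still request it per statement with `order by stamp desc` on a single-partition select; only the on-disk default order changes.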

