Let's consider the following entity for all the examples below:
@Table
public class User
{
    @PartitionKey
    private Long userId;

    @Column
    private String firstname;

    @Column
    private String lastname;

    public User() {}

    public User(Long userId, String firstname, String lastname) {...}

    //Getters & Setters
}
manager
.crud()
.insert(new User(10L,"John","DOE"))
.execute();
manager
.crud()
.insertJSON("{\"userid\": 10, \"firstname\": \"John\", \"lastname\": \"DOE\"}")
.execute();
manager
.dsl()
.update()
.fromBaseTable()
.firstname().Set("Jonathan")
.where()
.userId().Eq(10L)
.execute();
manager
.dsl()
.update()
.fromBaseTable()
.firstname().Set_FromJSON("\"Jonathan\"")
.where()
.userId().Eq_FromJSON("10")
.execute();
User user = new User(10L, null, null);
manager
.crud()
.delete(user)
.execute();
manager
.crud()
.deleteById(10L)
.execute();
manager
.crud()
.deleteByPartitionKeys(10L)
.execute();
manager
.dsl()
.delete()
.lastname()
.fromBaseTable()
.where()
.userId().Eq(10L)
.execute();
manager
.dsl()
.delete()
.lastname()
.fromBaseTable()
.where()
.userId().Eq_FromJSON("10")
.execute();
manager
.dsl()
.update()
.fromBaseTable()
.lastname().Set(null)
.where()
.userId().Eq(10L)
.execute();
Do not forget that in CQL semantics, setting a column to null means deleting it.
For all examples in this section, let's consider the following clustered entity representing a tweet line:
@Table(table = "lines")
public class TweetLine
{
    @PartitionKey
    @Column("user_id")
    private Long userId;

    @ClusteringColumn(1)
    @Enumerated
    private LineType type;

    @ClusteringColumn(value = 2, asc = false) // Sort in descending order
    @TimeUUID // Time uuid type in Cassandra
    @Column("tweet_id")
    private UUID tweetId;

    @Column
    private String content;

    //Getters & Setters

    public static enum LineType
    { USERLINE, TIMELINE, FAVORITELINE, MENTIONLINE }
}
Get the last 10 tweets from the timeline, starting from the tweet with lastUUID:
// Generate SELECT * FROM lines WHERE user_id = ? AND (type, tweet_id) < (?,?) AND type >= ?
List<TweetLine> tweets = manager
.dsl()
.select()
.allColumns_FromBaseTable()
.where()
.userId().Eq(10L)
.type_And_tweetId().type_And_tweetId_Lt_And_type_Gte(LineType.TIMELINE, lastUUID, LineType.TIMELINE)
.limit(10)
.getList();
@Table
public class User {
    @PartitionKey
    private UUID user_id;
    ...
    @Index
    @Column
    private int age;
    ...
}
manager
.indexed()
.select()
.allColumns_FromBaseTable()
.where()
.age().Eq(32)
....
@Table
public class User {
    @PartitionKey
    private UUID user_id;
    ...
    @SASI(indexMode = IndexMode.CONTAINS, analyzed = true, analyzerClass = Analyzer.NON_TOKENIZING_ANALYZER, normalization = Normalization.LOWERCASE)
    @Column
    private String name;

    @SASI(indexMode = IndexMode.PREFIX, analyzed = false)
    @Column
    private String country;

    @SASI
    @Column
    private int age;
    ...
}
manager
.indexed()
.select()
.allColumns_FromBaseTable()
.where()
.name().Contains("John")
.age().Gte_And_Lte(25, 35)
.country().Eq("USA")
....
@Table
public class User {
    @PartitionKey
    private UUID user_id;
    ...
    @DSE_Search(fullTextSearchEnabled = true)
    @Column
    private String name;

    @DSE_Search
    @Column
    private String country;

    @DSE_Search
    @Column
    private int age;
    ...
}
//Standard usage
manager
.indexed()
.select()
.allColumns_FromBaseTable()
.where()
.name().Contains("John")
.age().Gte_And_Lte(25, 35)
.country().Eq("USA")
....
//Raw Predicate
manager
.indexed()
.select()
.allColumns_FromBaseTable()
.where()
.name().RawPredicate("*Jo??y*")
....
//Raw Solr query with OR predicate
manager
.indexed()
.select()
.allColumns_FromBaseTable()
.where()
.rawSolrQuery("(name:*John* OR login:jdoe*) AND age:[25 TO 35]")
....
Fetch all timeline tweets in batches of 100:
Iterator<TweetLine> iterator = manager
.dsl()
.select()
.allColumns_FromBaseTable()
.where()
.userId().Eq(10L)
.type_And_tweetId().type_And_tweetId_Lt_And_type_Gte(LineType.TIMELINE, lastUUID, LineType.TIMELINE)
.withFetchSize(100) // Fetch Size = 100 for each page
.iterator();
while(iterator.hasNext())
{
TweetLine timelineTweet = iterator.next();
...
}
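The iterator above relies on the driver's transparent paging: whenever the current page of results is exhausted, the next page is fetched behind the scenes. The mechanism can be sketched in plain Java with a hypothetical, simplified paging iterator (the `PagingIterator` class and its page-supplier signature below are illustrative assumptions, not Achilles or driver types):

```java
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.BiFunction;

// Hypothetical analogue of fetch-size paging: the iterator pulls one "page"
// of results at a time from a page supplier, the way the driver fetches the
// next page transparently when the current one is exhausted.
class PagingIterator<T> implements Iterator<T> {
    private final BiFunction<Integer, Integer, List<T>> pageSupplier; // (offset, pageSize) -> page
    private final int pageSize;
    private int offset = 0;
    private Iterator<T> currentPage = Collections.emptyIterator();
    private boolean exhausted = false;

    PagingIterator(BiFunction<Integer, Integer, List<T>> pageSupplier, int pageSize) {
        this.pageSupplier = pageSupplier;
        this.pageSize = pageSize;
    }

    @Override
    public boolean hasNext() {
        while (!currentPage.hasNext() && !exhausted) {
            List<T> page = pageSupplier.apply(offset, pageSize);
            offset += page.size();
            // a short page means the source has no more rows
            exhausted = page.size() < pageSize;
            currentPage = page.iterator();
        }
        return currentPage.hasNext();
    }

    @Override
    public T next() {
        if (!hasNext()) throw new NoSuchElementException();
        return currentPage.next();
    }
}

public class PagingDemo {
    public static void main(String[] args) {
        List<Integer> data = new java.util.ArrayList<>();
        for (int i = 0; i < 250; i++) data.add(i);
        // Page supplier simulating a server returning slices of at most pageSize rows
        PagingIterator<Integer> it = new PagingIterator<>(
                (offset, size) -> data.subList(offset, Math.min(offset + size, data.size())),
                100);
        int count = 0;
        while (it.hasNext()) { it.next(); count++; }
        System.out.println(count); // prints 250
    }
}
```

With a fetch size of 100 and 250 matching rows, the consumer sees a single continuous iteration while three pages are fetched underneath.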
// Generate DELETE FROM lines WHERE user_id = ? AND type = ?
manager
.dsl()
.delete()
.allColumns_FromBaseTable()
.where()
.userId().Eq(10L)
.type().Eq(LineType.TIMELINE)
.execute();
// Generate DELETE FROM lines WHERE user_id = ?
manager
.crud()
.deleteByPartitionKeys(10L)
.execute();
To declare a JavaBean as a UDT, annotate it with @UDT:
@UDT(keyspace = "...", name = "user_udt")
public class UserUDT
{
    @Column
    private Long userId;

    @Column
    private String firstname;

    @Column
    private String lastname;

    //Getters & Setters
}
Then you can re-use the UDT in another entity:
@Table
public class Tweet
{
    @PartitionKey
    @TimeUUID
    private UUID id;

    @Column
    private String content;

    @Column
    @Frozen
    private UserUDT author;

    //Getters & Setters
}
Please note that the @Frozen annotation is mandatory for UDTs. Unfrozen UDTs are only available in Cassandra 3.6 and later.
Achilles annotation processor will generate, for each entity:
- an EntityClassName_Manager class
- an EntityClassName_AchillesMeta class
The EntityClassName_AchillesMeta class provides the following methods for encoding/decoding:
- public T createEntityFrom(Row row): self-explanatory
- public ConsistencyLevel readConsistency(Optional<ConsistencyLevel> runtimeConsistency): retrieve the read consistency from the runtime value or static annotation
- public ConsistencyLevel writeConsistency(Optional<ConsistencyLevel> runtimeConsistency): retrieve the write consistency from the runtime value or static annotation
- public ConsistencyLevel serialConsistency(Optional<ConsistencyLevel> runtimeConsistency): retrieve the serial consistency from the runtime value or static annotation
- public InsertStrategy insertStrategy(): determine the insert strategy using the static annotation and the Achilles global setting
- public void triggerInterceptorsForEvent(Event event, T instance): trigger all registered interceptors for this entity
Each meta class contains a public static field for each property. For example, given the following entity:
@Table
public class User {
    @PartitionKey
    private Long userId;

    @Column
    private String firstname;

    @Column
    private String lastname;

    @Column
    private Set<String> favoriteTags;
    ...
}
The User_AchillesMeta class will expose the following static property metas:
- User_AchillesMeta.userId
- User_AchillesMeta.firstname
- User_AchillesMeta.lastname
- User_AchillesMeta.favoriteTags
Each property meta class will expose:
- public VALUETO encodeFromJava(VALUEFROM javaValue): encode the given Java value into a CQL-compatible value using the Codec System
- public VALUEFROM decodeFromGettable(GettableData gettableData): decode the value of the current property from the GettableData object. GettableData is the common interface for com.datastax.driver.core.Row, com.datastax.driver.core.UDTValue and com.datastax.driver.core.TupleValue
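The encode/decode contract that these generated property metas fulfil can be illustrated with a hypothetical, self-contained codec (the `SimpleCodec` interface and `LineTypeCodec` class below are simplified assumptions, not the actual Achilles API) converting a Java enum to and from its CQL text representation:

```java
// Hypothetical, simplified analogue of a property codec: a pair of symmetric
// conversions between the Java type and the CQL-compatible type.
interface SimpleCodec<FROM, TO> {
    TO encodeFromJava(FROM javaValue);  // Java value -> CQL-compatible value
    FROM decodeFromCql(TO cqlValue);    // CQL value -> Java value
}

enum LineType { USERLINE, TIMELINE, FAVORITELINE, MENTIONLINE }

final class LineTypeCodec implements SimpleCodec<LineType, String> {
    @Override
    public String encodeFromJava(LineType javaValue) {
        // an @Enumerated column is stored as its name in a CQL text column
        return javaValue == null ? null : javaValue.name();
    }

    @Override
    public LineType decodeFromCql(String cqlValue) {
        return cqlValue == null ? null : LineType.valueOf(cqlValue);
    }
}

public class CodecDemo {
    public static void main(String[] args) {
        SimpleCodec<LineType, String> codec = new LineTypeCodec();
        String encoded = codec.encodeFromJava(LineType.TIMELINE);
        System.out.println(encoded);                      // prints TIMELINE
        System.out.println(codec.decodeFromCql(encoded)); // prints TIMELINE
    }
}
```

The real Codec System generalizes this idea to all mapped types, including collections, UDTs and tuples.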
final Statement statement = session.newSimpleStatement("SELECT firstname,lastname FROM user LIMIT :lim");
List<TypedMap> rows = userManager
.raw()
.nativeQuery(statement, 100)
.getList();
for(TypedMap row : rows)
{
String firstname = row.getTyped("firstname");
String lastname = row.getTyped("lastname");
...
}
final Statement statement = session.newSimpleStatement("SELECT firstname,lastname FROM user LIMIT :lim");
List<User> users = userManager
.raw()
.typedQueryForSelect(statement, 100)
.getList();
for(User user : users)
{
...
}
final CompletableFuture<Empty> futureInsert = userManager
.crud()
.insert(new User(...))
.executeAsync();
final CompletableFuture<User> futureUser = userManager
.crud()
.findById(10L)
.executeAsync();
final CompletableFuture<Empty> futureDelete = userManager
.crud()
.deleteById(10L)
.executeAsync();
Note: Empty is a singleton enum used to avoid returning a CompletableFuture of null.
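The idea behind that singleton can be sketched in plain Java (a hypothetical, simplified version for illustration, not the actual Achilles class):

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch of the "Empty" pattern: a singleton enum used as the
// result type of write operations, so callers get CompletableFuture<Empty>
// instead of a CompletableFuture<Void> that completes with null.
enum Empty {
    INSTANCE
}

public class EmptyDemo {
    // A write operation with nothing meaningful to return completes with Empty.INSTANCE
    static CompletableFuture<Empty> fakeInsert() {
        return CompletableFuture.completedFuture(Empty.INSTANCE);
    }

    public static void main(String[] args) {
        Empty result = fakeInsert().join();
        System.out.println(result); // prints INSTANCE
    }
}
```

An enum is a convenient singleton here: there is exactly one instance, it is serialization-safe, and the future is always completed with a non-null value.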
final CompletableFuture<List<TweetLine>> futureTweets = tweetManager
.dsl()
.select()
.allColumns_FromBaseTable()
.where()
.userId().Eq(10L)
.type().Eq(LineType.TIMELINE)
.limit(30)
.getListAsync();
final CompletableFuture<Empty> futureUpdate = userManager
.dsl()
.update()
.fromBaseTable()
.lastname().Set("new lastname")
.where()
.userId().Eq(10L)
.executeAsync();
final CompletableFuture<Empty> futureDelete = tweetManager
.dsl()
.delete()
.allColumns_FromBaseTable()
.where()
.userId().Eq(10L)
.type().Eq(LineType.TIMELINE)
.executeAsync();
final Statement statement = session.newSimpleStatement("SELECT firstname,lastname FROM user LIMIT :lim");
CompletableFuture<List<TypedMap>> futureTypedMaps = userManager
.raw()
.nativeQuery(statement, 100)
.getListAsync();
CompletableFuture<List<User>> futureUsers = userManager
.raw()
.typedQueryForSelect(statement, 100)
.getListAsync();
final ExecutionInfo executionInfo = userManager
.crud()
.insert(new User(...))
.executeWithStats();
final ExecutionInfo executionInfo = userManager
.crud()
.deleteById(10L)
.executeWithStats();
final Tuple2<User, ExecutionInfo> resultWithExecInfo = userManager
.crud()
.findById(10L)
.getWithStats();
final Tuple2<List<TweetLine>, ExecutionInfo> tweetsWithStats = tweetManager
.dsl()
.select()
.allColumns_FromBaseTable()
.where()
.userId().Eq(10L)
.type().Eq(LineType.TIMELINE)
.limit(30)
.getListWithStats();
final ExecutionInfo executionInfo = userManager
.dsl()
.update()
.fromBaseTable()
.lastname().Set("new lastname")
.where()
.userId().Eq(10L)
.executeWithStats();
final ExecutionInfo executionInfo = tweetManager
.dsl()
.delete()
.allColumns_FromBaseTable()
.where()
.userId().Eq(10L)
.type().Eq(LineType.TIMELINE)
.executeWithStats();
final Statement statement = session.newSimpleStatement("SELECT firstname,lastname FROM user LIMIT :lim");
Tuple2<List<TypedMap>, ExecutionInfo> typedMapsWithStats = userManager
.raw()
.nativeQuery(statement, 100)
.getListWithStats();
Tuple2<List<User>, ExecutionInfo> usersWithStats = userManager
.raw()
.typedQueryForSelect(statement, 100)
.getListWithStats();
@Table
@Consistency(read=ConsistencyLevel.ONE, write=ConsistencyLevel.QUORUM, serial = ConsistencyLevel.SERIAL)
public class User
{
...
}
userManager
.crud()
...
.withConsistencyLevel(ConsistencyLevel.QUORUM)
...
userManager
.dsl()
...
.withConsistencyLevel(ConsistencyLevel.QUORUM)
...
@Table
@TTL(1000)
public class User
{
...
}
userManager
.crud()
.insert(...)
...
.usingTimeToLive(10)
...
userManager
.dsl()
.update()
...
.usingTimeToLive(10)
...
userManager
.crud()
.insert(...)
...
.usingTimestamp(new Date().getTime())
...
userManager
.crud()
.deleteById(...)
...
.usingTimestamp(new Date().getTime())
...
userManager
.dsl()
.update()
...
.usingTimestamp(new Date().getTime())
...
userManager
.dsl()
.delete()
...
.usingTimestamp(new Date().getTime())
...
userManager
.crud()
.insert(...)
...
.ifNotExists()
...
userManager
.crud()
.deleteById(...)
...
.ifExists()
...
userManager
.dsl()
.update()
...
.ifExists()
...
userManager
.dsl()
.update()
.fromBaseTable()
.firstname().Set("new firstname")
...
.if_Firstname().Eq("previous_firstname")
...
userManager
.dsl()
.delete()
...
.ifExists()
...
userManager
.dsl()
.delete()
...
.if_Firstname().Eq("previous_firstname")
...
To have tighter control on LWT updates, inserts or deletes, Achilles lets you inject a listener for LWT operations result.
LWTResultListener lwtListener = new LWTResultListener() {
@Override
public void onSuccess() {
// Do something on success
// Default method does NOTHING
}
@Override
public void onError(LWTResult lwtResult) {
//Get type of LWT operation that fails
LWTResult.Operation operation = lwtResult.operation();
// Print out current values
TypedMap currentValues = lwtResult.currentValues();
for(Entry<String,Object> entry: currentValues.entrySet()) {
System.out.println(String.format("%s = %s",entry.getKey(), entry.getValue()));
}
}
};
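The callback contract can be modelled in plain Java as follows (a hypothetical, simplified sketch: the `LwtListener` interface and `fakeConditionalInsert` helper below are illustrative assumptions, not the real Achilles types):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical, simplified model of the LWT listener contract: on a failed
// compare-and-set, the listener receives the operation type and the current
// column values that Cassandra returned with the rejected operation.
interface LwtListener {
    default void onSuccess() { /* default does nothing */ }
    void onError(String operation, Map<String, Object> currentValues);
}

public class LwtDemo {
    // Simulate an INSERT ... IF NOT EXISTS whose condition was not applied
    static void fakeConditionalInsert(boolean applied, LwtListener listener) {
        if (applied) {
            listener.onSuccess();
        } else {
            Map<String, Object> current = new LinkedHashMap<>();
            current.put("firstname", "John"); // the value currently stored
            listener.onError("INSERT", current);
        }
    }

    public static void main(String[] args) {
        // onError is the single abstract method, so a lambda works here too
        fakeConditionalInsert(false, (operation, currentValues) ->
                System.out.println(operation + " rejected, current values: " + currentValues));
    }
}
```

Because only onError is abstract, a lambda is enough when you just want failure handling, which is exactly why the one-liner `withLWTResultListener(lwtResult -> ...)` form above works.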
userManager
.crud()
.insert(new User(...))
.ifNotExists()
.withLWTResultListener(lwtListener)
.execute();
//OR
userManager
.crud()
.insert(new User(...))
.ifNotExists()
.withLWTResultListener(lwtResult -> logger.error("Error : " + lwtResult))
.execute();
@Table(table = "retweet_count")
public class Retweets {
    @PartitionKey
    @Column("user_id")
    private Long userId;

    @ClusteringColumn(1)
    @Enumerated
    private LineType type;

    @ClusteringColumn(value = 2, asc = false)
    @TimeUUID
    @Column("tweet_id")
    private UUID tweetId;

    @Counter
    @Column("direct_retweets")
    private Long directRetweets;

    @Counter
    @Column("total_retweets")
    private Long totalRetweets;

    //Getters & Setters
}
Once the entity mapping is defined, the CRUD API for counter tables is restricted to the deleteById() and deleteByPartitionKeys() methods (no insert()).
To declare a materialized view, use the @MaterializedView annotation:
@MaterializedView(baseEntity = EntitySensor.class, view = "sensor_by_type")
public class ViewSensorByType {
    @PartitionKey
    @Enumerated
    private SensorType type;

    @ClusteringColumn(1)
    private Long sensorId;

    @ClusteringColumn(2)
    private Long date;

    @Column
    private Double value;
    ...
    //Getters & setters
}
@Table(table = "sensor")
public class EntitySensor {
    @PartitionKey
    private Long sensorId;

    @ClusteringColumn
    private Long date;

    @Enumerated
    @Column
    private SensorType type;

    @Column
    private Double value;
    ...
    //Getters & setters
}
The view must reference its base table using the baseEntity attribute. It must also re-use the columns
that belong to the base table's primary key, possibly in a different order.
Achilles will generate only the SELECT API for those views; UPDATE and DELETE operations are not possible.
See Materialized View Mapping for more details
You can declare the signature of your functions in a class/interface so that Achilles can generate a type-safe
API for invoking them in the Select DSL API.
For this, use the @FunctionRegistry annotation:
@FunctionRegistry
public interface MyFunctionRegistry {
    Long convertToLong(String longValue);
}
Please note that you need to declare your user-defined function in Cassandra yourself; Achilles only
uses the function signature for code generation, it does not create the function.
For more details, see Functions Mapping
You can use the Manager object for simple object mapping
// Execution of custom query
Row row = session.execute(...).one();
User user = userManager.mapFromRow(row);
You can retrieve the native Session and Cluster object from the Manager
Session session = userManager.getNativeSession();
Cluster cluster = userManager.getNativeCluster();
Generating com.datastax.driver.core.BoundStatement
BoundStatement bs = userManager
.crud()
...
.generateAndGetBoundStatement();
BoundStatement bs = userManager
.dsl()
...
.generateAndGetBoundStatement();
Generating query string
String statement = userManager
.crud()
...
.getStatementAsString();
String statement = userManager
.dsl()
...
.getStatementAsString();
Extract raw bound values
List<Object> boundValues = userManager
.crud()
...
.getBoundValues();
List<Object> boundValues = userManager
.dsl()
...
.getBoundValues();
Extract encoded bound values. The encoding relies on Achilles Codec System
List<Object> encodedBoundValues = userManager
.crud()
...
.getEncodedBoundValues();
List<Object> encodedBoundValues = userManager
.dsl()
...
.getEncodedBoundValues();
Normally you define the keyspace/table name statically using the @Table annotation.
However, in a multi-tenant environment, the keyspace/table name is not known ahead of time but only
during runtime. For this, Achilles defines an interface SchemaNameProvider:
public interface SchemaNameProvider {
/**
* Provide keyspace name for entity class
*/
<T> String keyspaceFor(Class<T> entityClass);
/**
* Provide table name for entity class
*/
<T> String tableNameFor(Class<T> entityClass);
}
You can implement this interface and inject the schema name provider at runtime. Both CRUD API and
DSL API accept dynamic binding of schema name:
final SchemaNameProvider dynamicProvider = ...;
userManager
.crud()
...
.withSchemaNameProvider(dynamicProvider)
.execute();
userManager
.dsl()
.select()
...
.from(dynamicProvider)
.where()
...
userManager
.dsl()
.update()
.from(dynamicProvider)
...
.where()
...
userManager
.dsl()
.delete()
...
.from(dynamicProvider)
...
.where()
...
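A possible SchemaNameProvider implementation for a multi-tenant setup might look like this (the interface is copied from the doc; the `TenantSchemaProvider` class and its per-tenant naming scheme are assumptions for illustration):

```java
// The interface as defined by the doc, reproduced here to keep the sketch
// self-contained.
interface SchemaNameProvider {
    <T> String keyspaceFor(Class<T> entityClass);
    <T> String tableNameFor(Class<T> entityClass);
}

// Hypothetical multi-tenant provider: one keyspace per tenant, default table names.
public class TenantSchemaProvider implements SchemaNameProvider {
    private final String tenantId;

    public TenantSchemaProvider(String tenantId) {
        this.tenantId = tenantId;
    }

    @Override
    public <T> String keyspaceFor(Class<T> entityClass) {
        // e.g. the "acme" tenant resolves to keyspace "tenant_acme"
        return "tenant_" + tenantId;
    }

    @Override
    public <T> String tableNameFor(Class<T> entityClass) {
        // keep a simple default table naming: lower-cased simple class name
        return entityClass.getSimpleName().toLowerCase();
    }

    public static void main(String[] args) {
        SchemaNameProvider provider = new TenantSchemaProvider("acme");
        System.out.println(provider.keyspaceFor(Object.class));  // prints tenant_acme
        System.out.println(provider.tableNameFor(Object.class)); // prints object
    }
}
```

The provider resolved for the current tenant (from a request context, for instance) would then be passed to the queries shown above.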
Sometimes it is convenient to let Achilles generate the CREATE TABLE scripts for you. To do that, set DEBUG level on the logger ACHILLES_DDL_SCRIPT:
<logger name="ACHILLES_DDL_SCRIPT">
    <level value="DEBUG" />
</logger>
Achilles provides a module achilles-schema-generator to help you generate
CQL schema scripts for your entities. More details here
To debug Achilles behavior, you can enable DML statement logging by setting DEBUG level on the logger ACHILLES_DML_STATEMENT:
<logger name="ACHILLES_DML_STATEMENT">
    <level value="DEBUG" />
</logger>