outworkers / phantom
Schema safe, type-safe, reactive Scala driver for Cassandra/Datastax Enterprise
Home Page: http://outworkers.github.io/phantom/
License: Apache License 2.0
Currently, there is no type-safe way of generating a `SELECT COUNT(*) FROM table_name` CQL statement.
In Cassandra, the order in which the keys are specified in the CREATE TABLE statement is important. At the moment, the order generated by phantom is random.
Are there any plans to support prepared statements?
Now I am using my own abstraction. Besides simple prepared statements, it allows choosing the execution context for a Future.
import java.util.concurrent.Executor
import scala.concurrent.{ ExecutionContext, Future => ScalaFuture, Promise => ScalaPromise }
import com.datastax.driver.core.{ BoundStatement, ResultSet, Session, Statement }
import com.google.common.util.concurrent.{ FutureCallback, Futures }

abstract class ExecutablePreparedStatement(implicit val session: Session, context: ExecutionContext with Executor) {
  val query: String

  private lazy val statement = session.prepare(query)

  def execute(values: java.lang.Object*): ScalaFuture[ResultSet] = {
    val bs = new BoundStatement(statement).bind(values: _*)
    statementToFuture(bs)
  }

  private def statementToFuture(s: Statement)(implicit session: Session): ScalaFuture[ResultSet] = {
    val promise = ScalaPromise[ResultSet]()
    val future = session.executeAsync(s)
    val callback = new FutureCallback[ResultSet] {
      def onSuccess(result: ResultSet): Unit = promise success result
      def onFailure(err: Throwable): Unit = promise failure err
    }
    // the context doubles as a Java Executor so Guava can run the callback on it
    Futures.addCallback(future, callback, context)
    promise.future
  }
}
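A usage sketch of the abstraction above; the table, column, and implicit values are made up for illustration and assume a live Cassandra session is in scope:

```scala
import java.util.UUID
import java.util.concurrent.Executor
import scala.concurrent.{ ExecutionContext, Future }
import com.datastax.driver.core.{ ResultSet, Session }

// Hypothetical subclass: the statement is prepared once, lazily,
// and bound with fresh values on every execute call
class SelectUserById(implicit session: Session, context: ExecutionContext with Executor)
  extends ExecutablePreparedStatement {
  val query = "SELECT * FROM users WHERE id = ?"
}

// With an implicit Session and ExecutionContext with Executor in scope:
// val result: Future[ResultSet] = new SelectUserById().execute(UUID.randomUUID())
```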
I had to track down this repo to get my project to build with v1.2.7. It would have been nice if that had been specified somewhere more obvious.
To facilitate wider usage of phantom, add cross Scala version support for 2.10 and 2.11.
Add support for Counter Columns in phantom, with basic update operations, and add tests for the new operations.
To make sure we can ship the last milestone, we need to test a table with 2 billion records. It should work with the default Table.fetchEnumerator method, allowing us to extract desired ranges of items from a 2 billion element queue.
Please advise on how we can best achieve this.
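A sketch of the kind of ranged extraction described above, using Play iteratees; the Record type and the Future-wrapped enumerator parameter are illustrative, not phantom API:

```scala
import play.api.libs.iteratee.{ Enumeratee, Enumerator, Iteratee }
import scala.concurrent.{ ExecutionContext, Future }

// Hypothetical record type standing in for a table's row type
case class Record(id: Long)

// Skip `offset` rows, then collect the next `limit` rows into a list,
// without ever materializing the full 2 billion element stream
def fetchRange(rows: Future[Enumerator[Record]], offset: Int, limit: Int)(
  implicit ec: ExecutionContext): Future[List[Record]] =
  rows.flatMap { e =>
    e &> Enumeratee.drop[Record](offset) &> Enumeratee.take[Record](limit) |>>> Iteratee.getChunks[Record]
  }
```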
While working on a servlet web app I discovered a bug which I identified as a race condition in the CassandraTable initialization code. It effectively prevents parallel instantiation of classes inheriting from CassandraTable, and as a consequence makes parallel class loading and static initialization of Scala object singletons inheriting from it impossible.
I managed to scrap up a minimal example without all the web app details:
Every run on my machine gives me exceptions from the scala-reflect internals like:
In the sources of CassandraTable I found a possible cause for the error:
it relies on the singleton scala.reflect.runtime.currentMirror in its initialization logic, and the scala-reflect API is known not to be thread-safe (at least in Scala 2.10.x):
CassandraTable.scala
import scala.reflect.runtime.{ currentMirror => cm, universe => ru }
...
private[this] val instanceMirror = cm.reflect(this)
private[this] val selfType = instanceMirror.symbol.toType
...
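A common workaround for pre-2.11 runtime reflection (a sketch of the pattern, not phantom's actual fix) is to serialize all reflective calls through a single global lock:

```scala
import scala.reflect.runtime.{ currentMirror => cm, universe => ru }

// Shared global lock: Scala 2.10 runtime reflection is not thread-safe,
// so every reflective call is serialized through this object.
object ReflectionLock

class SafeTable {
  val name: String = "safe_table"

  // Reflect on `this` under the lock instead of calling currentMirror freely
  private[this] val selfType: ru.Type = ReflectionLock.synchronized {
    cm.reflect(this).symbol.toType
  }

  // Any later reflective lookup takes the same lock
  def termNames: List[String] = ReflectionLock.synchronized {
    selfType.members.collect { case m if m.isTerm => m.name.toString.trim }.toList
  }
}
```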
I am really new to Scala/Java, so apologies if this is a stupid question, but how should the ExecutorService created in Manager be shut down?
Hi there!
Is there a way to evaluate elements of a map in a WHERE clause? I need to get entries based on a MapColumn's content.
Thanks,
Fran.
CQL supports partition keys formed from multiple columns; if required, we should try to make this possible in phantom as well.
As a developer, I need to execute a count query like:
SELECT COUNT(*) FROM table_name WHERE condition ALLOW FILTERING
The result of this query should be a Long.
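Until the DSL supports this, a count can be read from the Java driver directly; a sketch, where the table name and session are illustrative and a live cluster is assumed:

```scala
import com.datastax.driver.core.Session

// COUNT(*) comes back as a single bigint column, which maps to a Long
def countRows(session: Session, table: String): Long =
  session.execute(s"SELECT COUNT(*) FROM $table").one().getLong(0)
```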
Phantom is already enabled as a project on Maven Central. To share our project with the world without crashing our internal Maven repositories, re-add the settings.
When trying to select only the count column, the code doesn't compile.
The current implementation of CassandraTable is required to aggregate the inner column definitions in the order they are written. Using the Java reflection API returns the methods in a random order, with no guarantee that any order will be preserved. This leads to schema variations in PrimaryKey and PartitionKey columns, effectively causing discrepancies in composite keys.
Bump the version of the new driver to add support for Static columns.
Hello,
I'm new to phantom. I would like to implement test methods using phantom-test but I don't know how to use it. Can someone help me, please?
Thanks,
Mouhcine
I am having trouble including this package in my sbt project, in particular due to missing dependencies for sbt-pgp (0.8.1) and sbt-git (0.6.4). SBT cannot find these with the resolvers given in the build files for this repository. I am using Scala 2.10.4 with sbt 0.13.5. I got it to work by using GitHub snapshots of the dependencies, changing the sbt versions, and then using publish-local. Is there a repository/resolver that I can use instead?
Delete the leaked credentials, and the commits containing them, from the git history.
Cassandra offers CounterColumn definitions to allow for distributed counting. Implement the feature in phantom with basic add/subtract functionality.
In the new phantom-example module, please write concise examples of how to use iterators, including how and when to provide your custom implementations.
Please target this at both the open source audience and our own team.
Phantom looks like exactly what we need to upgrade the way we are interacting with Cassandra from Akka actors... unfortunately there is no Scala 2.11 release. Could you please provide one?
If you need any help turning your build into a cross-versioned one, please let me know and I'll submit a PR for it.
The phantom-example module should cover: Enumerators, Iterators, and how to define new ones.
I've looked into OR query support, but I just found the constant, since QueryBuilder does not support it.
Is there any way someone can try to build a SelectWhere.or which just wraps query parts and chains them with OR?
I would also be happy with something like:
CassandraTable[T, R] {
  def or(list: List[SelectWhere[T, R]]): ExecutableQuery
}
I don't want to step into the magic too hard :)
Thank you.
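Since CQL itself has no OR operator, one client-side approximation is to run the individual queries and merge their results; a minimal sketch with plain Scala futures, where `orQuery` is a hypothetical helper rather than phantom API:

```scala
import scala.concurrent.{ ExecutionContext, Future }

// Hypothetical helper: run each query, concatenate the results and
// drop duplicates, approximating WHERE ... OR ... semantics client-side
def orQuery[R](queries: List[() => Future[Seq[R]]])(implicit ec: ExecutionContext): Future[Seq[R]] =
  Future.sequence(queries.map(q => q())).map(_.flatten.distinct)
```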
I can't find a setting in phantom-testing to disable the massive console log output,
e.g.
9042 available
Starting cassandra
14:48:34.974 [pool-6-thread-1] INFO o.a.c.config.YamlConfigurationLoader - Loading settings
Is there a way to disable this?
Hey,
Was getting unresolved dependencies for play-iteratees_2.10;2.2.0 and noticed in the project's Build.scala that you guys are referencing 2.2.0, whereas the Maven Central repo only has 2.4.0-M1.
Where are you guys getting the 2.2.0 from?
The default partial select methods found in com.newzly.phantom.dsl.CassandraTable are missing unit tests.
Add at least one basic unit test for every one of them.
The current single-repository publishing mechanism does not allow for efficient distribution of new releases internally.
Using sbt-multi-release or custom SBT commands, set up a mechanism to allow publishing to a specific target.
Hi,
It seems that there is no build for Scala 2.11 on Maven Central. When could this be fixed?
There is no point in having an implicit parameter with a default value. It's enough to import the default executor using import com.newzly.phantom.Implicits.
Remove the default; overrides are done by providing an implicit executor in the current scope, which will take precedence over the default one.
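The mechanism can be sketched in plain Scala; the `Implicits` object and `fetch` method below are stand-ins for the library's default executor and a query method, not the actual phantom code:

```scala
import scala.concurrent.ExecutionContext

object Implicits {
  // stand-in for the library-provided default executor
  implicit val defaultExecutor: ExecutionContext = ExecutionContext.global
}

// stand-in for a query method that needs an execution context
def fetch()(implicit ec: ExecutionContext): ExecutionContext = ec

// callers opt in to the default by importing it...
def withDefault(): ExecutionContext = {
  import Implicits._
  fetch()
}

// ...or supply their own executor explicitly
def withCustom(ec: ExecutionContext): ExecutionContext = fetch()(ec)
```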
Reflection's getMethods is used to find all defined columns in CassandraTable. However, according to the Java documentation: "The elements in the array returned are not sorted and are not in any particular order."
In the case of a composite primary key (PartitionKey, PK1, PK2, PK3), this may result in a CREATE TABLE query where the primary keys are defined in the wrong order, e.g. PRIMARY KEY(PartitionKey, PK3, PK1, PK2), which breaks all queries.
Currently phantom doesn't allow setting a consistency level for anything other than InsertQuery. Consistencies should be specifiable for all CRUD operations.
I get this exception while running tests with CassandraFlatSpec. Seems like a Cassandra version issue. How can I fix this?
Exception encountered when invoking run on a nested suite - line 1:23 missing '=' at 'EXISTS'
com.datastax.driver.core.exceptions.SyntaxError: line 1:23 missing '=' at 'EXISTS'
at com.datastax.driver.core.exceptions.SyntaxError.copy(SyntaxError.java:35)
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:258)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:174)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:36)
at com.websudos.phantom.zookeeper.DefaultCassandraManager$$anonfun$initIfNotInited$1.apply(SimpleCassandraConnector.scala:72)
at com.websudos.phantom.zookeeper.DefaultCassandraManager$$anonfun$initIfNotInited$1.apply(SimpleCassandraConnector.scala:70)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.package$.blocking(package.scala:50)
at com.websudos.phantom.zookeeper.DefaultCassandraManager$.initIfNotInited(SimpleCassandraConnector.scala:70)
at com.websudos.phantom.testing.SimpleCassandraTest$class.beforeAll(SimpleCassandraConnector.scala:38)
The current API:
def fetchEnumerator()(implicit session: Session, ctx: ExecutionContext): ScalaFuture[PlayEnumerator[R]]
An Enumerator can be considered the analogue of an Iterator in the reactive/asynchronous space. Therefore, wrapping it in a Future seems redundant. The API would not break reactive principles if the function simply returned an Enumerator:
def fetchEnumerator()(implicit session: Session, ctx: ExecutionContext): PlayEnumerator[R]
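Play's iteratee library can flatten a `Future[Enumerator[R]]` into a plain `Enumerator[R]`, so the change could be a thin wrapper over the existing method; a sketch, not the actual implementation:

```scala
import play.api.libs.iteratee.{ Enumerator => PlayEnumerator }
import scala.concurrent.{ ExecutionContext, Future }

// Adapt the old Future-wrapped API to return a bare enumerator:
// consumers see one stream, which starts emitting once the future resolves
def unwrapped[R](old: Future[PlayEnumerator[R]])(implicit ec: ExecutionContext): PlayEnumerator[R] =
  PlayEnumerator.flatten(old)
```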
The return type for all queries should not be ResultSet, as this is not a phantom class.
Hi, I'm trying to store a composite object into cassandra, something like
case class UserAccount(id: UUID, msisdn: Option[String], email: Option[String], personalInfo: UserInfo, settings: AccountSettings)
and I would like to store the UserInfo and AccountSettings in a composite column, or in a super column under the same row of the UserAccount column family.
At the moment I'm flattening my structure inside the UserAccount record, but it would be nice to be able to define a more hierarchical structure.
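Until hierarchical structures are supported, one common workaround is to serialize the nested case class into a single text column; a minimal sketch with a hand-rolled codec, where the field layout and delimiter are illustrative:

```scala
case class UserInfo(firstName: String, lastName: String)

// Encode the nested case class into one text cell...
def encode(info: UserInfo): String = s"${info.firstName}|${info.lastName}"

// ...and decode it back when reading the row
def decode(s: String): UserInfo = s.split('|') match {
  case Array(first, last) => UserInfo(first, last)
}
```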
Some things should be logged in debug mode, like the exact query sent to the Cassandra cluster (this is very helpful when debugging).
I'm going through the documentation at http://websudos.github.io/phantom and it seems like there's no way to connect to a multi-node Cassandra setup without using zookeeper. Is that correct?
I see it in several places in the readme, but I do not see a method named useConsistencyLevel anywhere (especially not on an InsertQuery).
Add support for static columns, with basic support for all features of static columns.
Currently, the ThriftSeqColumn implementation extends Column with Set[T]. One allows duplicates, the other one doesn't.
Fix this to avoid any potential confusion and subtle bugs.
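The mismatch is easy to see in plain Scala:

```scala
// A Seq keeps duplicates, a Set silently collapses them
val asSeq = Seq(1, 1, 2) // 3 elements
val asSet = Set(1, 1, 2) // 2 elements
```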
I can't post this to the Google group as it says I don't have permission, so consider this issue a feature request:
I could not find any way to use Cassandra's blob data type. Does phantom have it? Or will it?
Phantom needs an easy setup for being added as a dependency in other projects
To replicate, use com.newzly.phantom.tables.Recipes, available in the phantom-test module.
val items = List("sugar", "spice")
// prepend a list of items
Recipes.where(_.url eqs someUrl).modify(_.ingredients prependAll items).one()
// check the output
val item = Recipes.select(_.ingredients).where(_.url eqs someUrl).one().sync().get
// the initial list of items comes back reversed, as "spice", "sugar" ...
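The reversal is consistent with what happens when a batch is prepended element by element, which is presumably what the generated query does; a plain-Scala illustration:

```scala
val items = List("sugar", "spice")

// Prepending one element at a time reverses the batch:
// "sugar" goes on first, then "spice" lands in front of it
val result = items.foldLeft(List.empty[String])((acc, item) => item :: acc)
// result == List("spice", "sugar")
```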
What is the state of Spark integration? I can't find any branch related to Spark, so I assume it hasn't been started yet.
I'm playing with Spark, the Datastax Spark connector and phantom in the same project, and I feel that I understand what I really need. If there is no branch to pick up, I'll try to submit a basic proof of concept this week.
Currently tests are excluded in the build definition. This is a nasty solution that doesn't treat the underlying problem.
I'm using the published dev release of 1.2.7 for Scala 2.11
I get the following error when extracting the row values
Error: can't extract required value for column 'param_id'
The first column, decoder_key, is extracted fine, but the second column fails with that error. A toString() on the row reveals all the column data is there and intact.
CREATE TABLE answers_clustered (
  decoder_key text,
  param_id bigint,
  answers set<int>,
  first_answered timestamp,
  last_answered timestamp,
  expires timestamp,
  PRIMARY KEY (decoder_key, param_id)
);
A developer needs to be able to set the secondary index name, as this is unique per keyspace, and the default value is mapped to the name of the column, which is unique only at the table level.