longevity's Introduction

longevityframework

Right now I just want to redirect http://longevityframework.github.io/ to http://longevityframework.github.io/longevity.

longevity's People

Contributors

gitter-badger, nessus42, sullivan-


longevity's Issues

auxiliary primary keys

#108382016 gives the user control over the database key that allows for fast partitioned access in a partitioned database. However, database users sometimes need to resort to constructing a second table when they need an auxiliary path to look up an aggregate quickly. (This secondary table essentially maps from the auxiliary key to the partition key of the main table.) Add a PType.auxPrimaryKey construct that does this for the user, instead of them having to create an auxiliary "aggregate" to make this happen.

We would want an auxPartitionIndex too.
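A hypothetical sketch of how the proposed construct might read at a use site; auxPrimaryKey does not exist yet, and the User/Email property names just mirror the examples used elsewhere in these issues:

// hypothetical sketch: auxPrimaryKey is the proposed construct
@persistent[DomainModel]
case class User(username: Username, email: Email)

object User {
  // the real primary key, giving fast partitioned access
  implicit val usernameKey = primaryKey(props.username)

  // proposed: a second fast lookup path, backed by an auto-generated
  // auxiliary table that maps email to the partition key (username)
  implicit val emailKey = auxPrimaryKey(props.email)
}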

Support "in place" updates for Cassandra and JDBC back ends

Implement Repo.inplace.updateByKeyVal and Repo.inplace.updateByQuery to do updates "in place", i.e., directly on the database.

  • See #43 for discussion of Repo.inplace
  • Depends on #44, #46, and #47
  • We may want to split this in two, but seeing as it's probably going to be a while before anyone gets to this, we'll leave it as a fat ticket for now
  • See #45 for hints on the basic shape of the method signatures, implementation clues, and what kinds of integration tests will be necessary

Expose underlying database connections to the users

Let users have a back door to the database to do whatever they need to do. They can always open up another connection themselves; having two database connections open from the same app to the same database is workable, but less than ideal. (Actually, in SQLite, this probably isn't even workable, as SQLite has a single-writer model.)

This is slightly trickier than you might expect, because each back end has its own connection API, and these connection APIs are defined in optional dependencies. So we have to be careful about where we expose these methods, so that things don't break when the optional dependency is missing. We've handled this kind of thing before. See for example implicit def longevity.persistence.repoToAkkaStreamsRepo.

Probably the best signatures for these methods would be something like:

def mongoClientOpt: Option[com.mongodb.MongoClient]
def mongoDatabaseOpt: Option[com.mongodb.client.MongoDatabase]
def jdbcConnectionOpt: Option[java.sql.Connection]
def cassandraSessionOpt: Option[com.datastax.driver.core.Session]

A few notes here:

  • Normally the MongoClient should be sufficient, but in some circumstances, the MongoDatabase instance may be needed. (In case they need to connect to the admin database for some reason.)
  • We could choose not to wrap these in Options, and just throw an exception if the connection is closed. (See e.g. private lazy val session in MongoRepo.) But it's probably better to wrap in an Option than to introduce an API method that throws an exception. (Note that it would be quite inconvenient for, say, Repo.create to return an F[Option[PState[P]]], where the option was None if the connection was closed. So it's less unreasonable for Repo.create to throw a ConnectionClosedException. This exception is thrown within the effect F, of course. There are probably some good functional approaches that could avoid exception throwing cleanly here...)
  • It's important that the end user plays nice with the exposed session, i.e., doesn't close it. Closing it would put Repo internal state out of sync. (Again, there are probably some good functional approaches to get rid of this internal state. But they are certainly not obvious.) This "plays nice" requirement should be mentioned in the Scaladocs for the method.
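A minimal sketch of how the mongo variants could be exposed without breaking users who lack the optional dependency, following the repoToAkkaStreamsRepo pattern; MongoRepoOps, repoToMongoRepoOps, and the Repo[F, M] shape are assumptions:

// hypothetical sketch; the names and the Repo[F, M] shape are illustrative
package longevity.persistence.mongo

import com.mongodb.MongoClient
import com.mongodb.client.MongoDatabase
import longevity.persistence.Repo

// this class is the only code that references the optional mongo driver,
// so nothing breaks when the driver is missing from the classpath
class MongoRepoOps[F[_], M](repo: Repo[F, M]) {
  def mongoClientOpt: Option[MongoClient] = ???
  def mongoDatabaseOpt: Option[MongoDatabase] = ???
}

object MongoRepoOps {
  // users opt in with an import, just as with repoToAkkaStreamsRepo
  implicit def repoToMongoRepoOps[F[_], M](repo: Repo[F, M]): MongoRepoOps[F, M] =
    new MongoRepoOps(repo)
}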

In place updates for MongoDB back end

Implement Repo.inplace.updateByKeyVal and Repo.inplace.updateByQuery to do updates "in place", i.e., directly on the database.

  • See #43 for discussion of Repo.inplace
  • Depends on #44.
  • The first has two arguments: an UpdateClause and a key value. See Repo.retrieve for how to properly constrain the type of the key value. See how Query is constrained in the Repo.queryBy* methods for how to constrain the UpdateClause.
  • The second has two arguments: an UpdateClause and a Query.
  • Make it so! Examine how query objects are translated into mongo queries in MongoQuery to get some ideas on how to form the update clause.
  • You should probably write an implementation for the InMem back end at the same time; it won't be that hard.
  • Attempting these operations on a JDBC or Cassandra back end can just throw an exception for now.
  • Add an integration test for updateByKeyVal in RepoCrudSpec. It should avoid running if the back end is not supported.
  • Add integration tests for updateByQuery in longevity.integration.queries. How to design the tests, and just how many we need to cover the cases, will require some thought. Please try to keep them reasonably minimal, because these kinds of tests are expensive.
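A hypothetical sketch of the shape these signatures might take, as members of Repo; UpdateClause comes from the update DSL ticket below, and returning F[Long] as a count of affected rows is an assumption:

// hypothetical sketch, inside Repo[F, M]; the implicit constraints mirror
// how Repo.retrieve and the queryBy* methods are constrained
def updateByKeyVal[P, V](keyVal: V, update: UpdateClause[P])
  (implicit key: Key[M, P, V]): F[Long]

def updateByQuery[P](query: Query[P], update: UpdateClause[P])
  (implicit pEv: PEv[M, P]): F[Long]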

improve error message for query failures

In both the Cassandra and SQLite back ends, when the user constructs a query that mentions a property that is not in a key or index, the query fails at runtime. For Cassandra, it fails like this:

com.datastax.driver.core.exceptions.InvalidQueryException: Undefined name prop_char in where clause ('prop_char <= ?')

For SQLite, it fails like this:

Cause: org.sqlite.SQLiteException: [SQLITE_ERROR] SQL error or missing database (no such column: prop_char)

Ideally, we would catch these at compile time, but that seems to be very difficult. Second best would be to catch these errors before the query runs, and throw a longevity exception instead. Satisfactory would be to catch these kinds of errors when they are thrown, and rethrow a longevity exception. But we would have to be careful not to catch exceptions that have causes other than the one we are looking for.
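A minimal sketch of the "catch and rethrow" option for the Cassandra case, assuming a Future-based effect; UnindexedPropException is a hypothetical longevity exception, and the message matching is exactly the fragility the caveat above warns about:

import scala.concurrent.{ ExecutionContext, Future }
import com.datastax.driver.core.exceptions.InvalidQueryException

// hypothetical longevity exception; does not exist yet
class UnindexedPropException(cause: Throwable) extends RuntimeException(
  "query mentions a property that is not in a key or index", cause)

def translateQueryFailure[A](result: Future[A])(implicit ec: ExecutionContext): Future[A] =
  result.recoverWith {
    // fragile: we only want failures caused by an undefined column,
    // not InvalidQueryExceptions with other causes
    case e: InvalidQueryException if e.getMessage.startsWith("Undefined name") =>
      Future.failed(new UnindexedPropException(e))
  }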

Note that a migration to shapeless would probably allow for catching these at compile time. https://www.pivotaltracker.com/story/show/140864207

initial version of Play plugin

This is a pretty open-ended and undeveloped idea. See what opportunities we have for better Play integration by creating a Play plugin.

DSL to describe the "update" clause for an in place update

This is a prerequisite for any further updateByKeyVal or updateByQuery work.

  • I would recommend keeping it simple at first.
  • Follow longevity.model.query.QueryFilter as a rough model.
  • Basic structure is <prop> 'gets' <update-expr>
    • Provide both an operator (:= maybe) and a mnemonic equivalent such as gets
  • It's probably a good idea to restrict it to basic properties for the time being (i.e., things like ints and strings, not things like nested components, sets, and lists)
  • The update expression will vary by type; LHS and RHS have to match on type
    • update expression should allow for literals, basic properties, and basic operators over the types such as addition, logical or, string concatenation, etc.
  • Please do write some unit tests to make sure the DSL is fluent
  • When in doubt, ask @sullivan-; he will be happy to help. (A usage sketch follows this list.)
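A hypothetical sketch of what the DSL might look like in use; props.fullName, props.loginCount, and the and combinator are all illustrative:

// hypothetical usage sketch for the update DSL
import User.props

// operator spelling, with a literal on the right-hand side
val u1 = props.fullName := "John Smith"

// equivalent mnemonic spelling
val u2 = props.fullName gets "John Smith"

// the right-hand side can be an expression over a basic property;
// left- and right-hand sides must match on type
val u3 = props.loginCount := props.loginCount + 1

// combining clauses into a single update
val update = u1 and u3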

In the future, we might add support for things like:

  • collections and nested components
  • specialized operators such as incr that can either rely on underlying DB operators such as MongoDB's $inc, or be expanded into an equivalent expression, e.g., x + 1

Add IDEA support for the longevity annotations

IDEA can't expand the longevity macro annotations, and consequently shows error messages for primaryKey and props in an example like this:

@persistent[DomainModel]
case class User(username: Username, email: Email, fullName: FullName)

object User {
  implicit val usernameKey = primaryKey(props.username)
}

For references to User in other source files, while implicit resolution of the generated PEvs and the keys seems to work fine, we still get compiler errors on things like User.props and User.queryDsl (inherited from PType).

It looks like the right way to handle this is to use the "IntelliJ API to build scala macros support": https://blog.jetbrains.com/scala/2015/10/14/intellij-api-to-build-scala-macros-support/

It seems that JetBrains is wisely skipping full-blown support for Scala macros in favor of supporting Scala.meta. See e.g. here:

https://blog.jetbrains.com/scala/2016/11/11/intellij-idea-2016-3-rc-scala-js-scala-meta-and-more/

We plan on migrating longevity from Scala macros to Scala.meta as soon as possible. But the Scala.meta feature set is not quite developed enough for our needs as of yet. We're tracking progress on this front here: #37

So barring someone writing an IntelliJ plugin for the longevity macro annotations, we could alternatively wait and see how the situation looks after we migrate longevity to Scala.meta.

Migrate from Scala macros to scalameta

There are some missing features in scalameta that make this impossible at the moment. But it seems like the features we will need will come around relatively soon. I'm mostly looking at this Semantic API roadmap:

scalameta/scalameta#604

The main thing I think we would need is functionality to work with symbols: scalameta/scalameta#609

Possibly stuff from this Term.tpe story as well: scalameta/scalameta#611

There is an existing longevity branch, feat/meta, that contains the last attempt at migrating to meta. When the next meta release comes out, we should dust off that branch and see how much further we can get.

Support for binary data

Add support for binary data. This should be relatively easy with the current implementation. Just add some kind of binary type (e.g. an Array[Byte]) to basicTypes. This would in turn need to be stored as base64 in the JSON.

We would probably want to inhibit producing properties for this new basic type, as indexing or querying with <, ==, and > might be a little odd.

Any implementation would probably be affected by the migration to shapeless. In particular, we should investigate how circe would handle this. I don't see any harm in adding this feature before the migration to shapeless, so long as we convince ourselves that circe will be able to produce the proper JSON first.
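For reference, a minimal sketch of the base64 round trip such a basic type would need in the JSON layer, using only the standard library:

import java.util.Base64

// encode binary data into a JSON-safe base64 string
def toJsonString(bytes: Array[Byte]): String =
  Base64.getEncoder.encodeToString(bytes)

// decode it back when reading the JSON
def fromJsonString(s: String): Array[Byte] =
  Base64.getDecoder.decode(s)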

Controlled vocabularies not working

I have followed the steps here:
http://longevityframework.org/manual/poly/cv.html

My model looks like this:

@polyComponent[DomainModel]
sealed trait TicketType

@derivedComponent[DomainModel, TicketType]
case object Congratulation extends TicketType

@derivedComponent[DomainModel, TicketType]
case object Observation extends TicketType

@persistent[DomainModel]
case class User(
  username: Username,
  email: Email,
  fullName: FullName,
  ticketType: TicketType
)

When trying to persist a user, I am presented with the following error:

[error] Exception in thread "main" java.lang.ExceptionInInitializerError
[error] 	at blockingApplication$.delayedEndpoint$blockingApplication$1(blockingApplication.scala:10)
[error] 	at blockingApplication$delayedInit$body.apply(blockingApplication.scala:3)
[error] 	at scala.Function0.apply$mcV$sp(Function0.scala:34)
[error] 	at scala.Function0.apply$mcV$sp$(Function0.scala:34)
[error] 	at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
[error] 	at scala.App.$anonfun$main$1$adapted(App.scala:76)
[error] 	at scala.collection.immutable.List.foreach(List.scala:378)
[error] 	at scala.App.main(App.scala:76)
[error] 	at scala.App.main$(App.scala:74)
[error] 	at blockingApplication$.main(blockingApplication.scala:3)
[error] 	at blockingApplication.main(blockingApplication.scala)
[error] Caused by: java.util.NoSuchElementException: None.get
[error] 	at scala.None$.get(Option.scala:349)
[error] 	at scala.None$.get(Option.scala:347)
[error] 	at typekey.TypeKeyMap.apply(TypeKeyMap.scala:64)
[error] 	at longevity.model.ModelType$$anon$3.apply(ModelType.scala:154)
[error] 	at longevity.model.ModelType$$anon$3.apply(ModelType.scala:152)
[error] 	at typekey.BaseTypeBoundMap.mapValue$1(BaseTypeBoundMap.scala:68)
[error] 	at typekey.BaseTypeBoundMap.$anonfun$mapValuesUnderlying$1(BaseTypeBoundMap.scala:71)
[error] 	at scala.collection.MapLike$MappedValues.$anonfun$foreach$3(MapLike.scala:253)
[error] 	at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:789)
[error] 	at scala.collection.Iterator.foreach(Iterator.scala:929)
[error] 	at scala.collection.Iterator.foreach$(Iterator.scala:929)
[error] 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1406)
[error] 	at scala.collection.IterableLike.foreach(IterableLike.scala:71)
[error] 	at scala.collection.IterableLike.foreach$(IterableLike.scala:70)
[error] 	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
[error] 	at scala.collection.immutable.HashMap$HashMapCollision1.foreach(HashMap.scala:283)
[error] 	at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:788)
[error] 	at scala.collection.MapLike$MappedValues.foreach(MapLike.scala:253)
[error] 	at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:157)
[error] 	at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:155)
[error] 	at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
[error] 	at scala.collection.TraversableOnce.$div$colon(TraversableOnce.scala:151)
[error] 	at scala.collection.TraversableOnce.$div$colon$(TraversableOnce.scala:151)
[error] 	at scala.collection.AbstractTraversable.$div$colon(Traversable.scala:104)
[error] 	at scala.collection.immutable.MapLike.$plus$plus(MapLike.scala:88)
[error] 	at scala.collection.immutable.MapLike.$plus$plus$(MapLike.scala:87)
[error] 	at scala.collection.immutable.AbstractMap.$plus$plus(Map.scala:216)
[error] 	at typekey.TypeKeyMap.$plus$plus(TypeKeyMap.scala:134)
[error] 	at longevity.emblem.emblematic.Emblematic.reflectives$lzycompute(Emblematic.scala:22)
[error] 	at longevity.emblem.emblematic.Emblematic.reflectives(Emblematic.scala:22)
[error] 	at longevity.emblem.emblematic.EmblematicPropPath$.lookupReflective$1(EmblematicPropPath.scala:97)
[error] 	at longevity.emblem.emblematic.EmblematicPropPath$.unbounded(EmblematicPropPath.scala:116)
[error] 	at longevity.model.realized.RealizedProp$.validatePath(RealizedProp.scala:97)
[error] 	at longevity.model.realized.RealizedProp$.apply(RealizedProp.scala:82)
[error] 	at longevity.model.realized.RealizedPType.pair$1(RealizedPType.scala:34)
[error] 	at longevity.model.realized.RealizedPType.$anonfun$myRealizedProps$1(RealizedPType.scala:35)
[error] 	at scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:157)
[error] 	at scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:157)
[error] 	at scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:320)
[error] 	at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:976)
[error] 	at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:157)
[error] 	at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:155)
[error] 	at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
[error] 	at longevity.model.realized.RealizedPType.<init>(RealizedPType.scala:33)
[error] 	at longevity.model.ModelType.addPair$1(ModelType.scala:87)
[error] 	at longevity.model.ModelType.$anonfun$realizedPTypes$1(ModelType.scala:90)
[error] 	at scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:157)
[error] 	at scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:157)
[error] 	at scala.collection.Iterator.foreach(Iterator.scala:929)
[error] 	at scala.collection.Iterator.foreach$(Iterator.scala:929)
[error] 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1406)
[error] 	at scala.collection.MapLike$DefaultValuesIterable.foreach(MapLike.scala:210)
[error] 	at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:157)
[error] 	at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:155)
[error] 	at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
[error] 	at longevity.model.ModelType.<init>(ModelType.scala:75)
[error] 	at domainModel.DomainModel$modelType$.<init>(domainModel.scala:9)
[error] 	at domainModel.DomainModel$modelType$.<clinit>(domainModel.scala)
[error] 	... 11 more

add support for monix streams

If you are interested, please just ask me about it. I can provide tons of support to you in getting this implemented.

compile-time safety for cassandra queries

Right now, a number of queries will throw an exception on Cassandra - for instance, a query with an OR clause. Here's my idea for making this typesafe:

  • Add a type parameter to LongevityContext and Repo for the back end. Those types already exist in longevity.config.BackEnd. We will probably need to pull this as a configuration var at the same time. If so, we probably want to move that class hierarchy too - perhaps into longevity.context? Or into a new longevity.backend package.
  • Create a type class OrSupport[_ <: BackEnd]. Give it a private[longevity] constructor so people can't create their own. (See the sketch after this list.)
  • Create implicit OrSupport[MongoDB], etc., for the back ends that support OR queries. Put them where they will be found by implicit search - the OrSupport companion object would probably be best.
  • Add an implicit OrSupport[BE] argument to the "or" query methods - both in the query ADT and the query DSL.
  • Repeat for other query clauses that don't work on Cassandra.
  • Don't forget to delete the Cassandra-specific query exception classes.
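A minimal sketch of the type class machinery, assuming the BackEnd case objects in longevity.config are reused (as singleton types) for the new type parameter:

import longevity.config.{ BackEnd, MongoDB, SQLite }

// private constructor: user code can't conjure up its own evidence
final class OrSupport[BE <: BackEnd] private[longevity] ()

object OrSupport {
  // instances exist only for back ends that support OR queries, so an OR
  // query against a Cassandra-typed Repo fails to compile
  implicit val mongoOrSupport: OrSupport[MongoDB.type] = new OrSupport[MongoDB.type]
  implicit val sqliteOrSupport: OrSupport[SQLite.type] = new OrSupport[SQLite.type]
}

// the "or" methods would then demand the evidence, e.g.:
// def or(that: QueryFilter[P])(implicit orSupport: OrSupport[BE]): QueryFilter[P]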

enforce key constraints on create & update

  • turn on key enforcement via config
  • for primary keys and mongo, this should already be handled by basic schema generation.
  • for non-primary keys, mongo can enforce key constraint over a single partition with basic schema.
    • if we can know we are not sharded, then mongo enforces the key
  • for all cassandra keys, we need to manage this concern ourselves.
  • for non-partitioned keys and full enforcement, we need to manage this concern ourselves.
  • we might be able to work it into the insert/update commands, but if not, we just have to select the key before insert/update.

In place deletes

Add methods Repo.deleteByKeyVal and Repo.deleteByQuery to delete lots of rows "in place", i.e., without loading them from the database first.

Perhaps we should put these in Repo.inplace.deleteByKeyVal and Repo.inplace.deleteByQuery, so as to keep the Repo API from getting too cluttered. In this picture, we would have a val Repo.inplace with type InPlaceUpdates or some such. Note that the in place methods updateByKeyVal and updateByQuery are described in another ticket. A signature sketch follows.
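A hypothetical sketch of the delete signatures, mirroring the update signatures sketched in the MongoDB in place update ticket above; F[Long] as a count of deleted rows is an assumption:

// hypothetical sketch, inside Repo[F, M]; constraints mirror the update methods
def deleteByKeyVal[P, V](keyVal: V)(implicit key: Key[M, P, V]): F[Long]

def deleteByQuery[P](query: Query[P])(implicit pEv: PEv[M, P]): F[Long]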

  • Implement in all four back ends (Cassandra, JDBC, InMem, MongoDB).
  • Add integration test for deleteByKeyVal in RepoCrudSpec.
  • Add integration tests for deleteByQuery in longevity.integration.queries. It may take a little bit of thought and discussion to come up with a minimal set of tests here to cover cases.

Support a "No JSON" storage format for Cassandra

All fields will have to be realized, and the JSON removed. We probably want to support both this and the JSON format, e.g., as separate back ends. You're going to need to use user-defined types, sets, lists, and frozen collections here. This is a prerequisite for in place operations like updateByKeyVal in Cassandra.

Support query aggregation

Support for query operations such as sum, ave, and groupBy. Something like:

// sketch

sealed trait AggregationOp
case object Sum extends AggregationOp
case object Ave extends AggregationOp
// ...

case class Aggregation[P, A](query: Query[P], op: AggregationOp, prop: Prop[P, A])
// add DSL for aggregation clauses
// we might need to add a tighter type here for queries that don't have offset/limit clauses

// add a method to Repo something like this:
def aggregate[P, A](aggregation: Aggregation[P, A]): Future[A]

// and something like this for grouping:
case class GroupBy[P, A, B](agg: Aggregation[P, A], prop: Prop[P, B])
def groupBy[P, A, B](groupBy: GroupBy[P, A, B]): Future[Seq[(A, B)]]
// of course it would be better if we could group multiple properties at once. this would require
// some migration to shapeless to preserve a list of B types.
// see https://www.pivotaltracker.com/story/show/140864207

It would probably be best to hold off on attempting this until after the migration to shapeless.

We would have to target all three back ends here. But if you are interested in giving it a try, you can focus on a single back end. Once one is in place, I can help out with getting the other two back ends together. (Of course I am ready to help every step of the way; I just don't want you to be turned off by having to implement in three back ends.)

initial version of Spark plugin

This is a pretty open-ended and undeveloped idea. Whatever we can do to support integration with Spark. Probably the easiest thing to do would be to stream into Spark with a Query and a list of properties, to create a Spark DataFrame.
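A minimal sketch of the final hand-off to Spark, assuming the query results have already been streamed in and projected down to the requested properties (simplified here to strings); toDataFrame is a hypothetical helper:

import org.apache.spark.sql.{ Row, SparkSession }
import org.apache.spark.sql.types.{ StringType, StructField, StructType }

// hypothetical glue: turn projected query results into a Spark DataFrame
def toDataFrame(
  spark: SparkSession,
  propNames: Seq[String],
  rows: Seq[Seq[String]]) = {
  val schema = StructType(propNames.map(StructField(_, StringType)))
  val rowRdd = spark.sparkContext.parallelize(rows.map(Row.fromSeq))
  spark.createDataFrame(rowRdd, schema)
}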

add support for TTL

TTL = time to live

Both the MongoDB and Cassandra back ends have this feature. I think it would be worth exposing. Let's do it cleanly and in a typesafe way, so that it is a compile error to try to use this on a back end that does not support it.

If interested, please ask. I can provide support.
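A minimal sketch of how that typesafe gating might look, reusing the evidence pattern from the compile-time Cassandra query safety ticket above; TtlSupport and createWithTtl are hypothetical names:

import longevity.config.{ BackEnd, Cassandra, MongoDB }

// hypothetical evidence that a back end supports TTL
final class TtlSupport[BE <: BackEnd] private[longevity] ()

object TtlSupport {
  implicit val mongoTtl: TtlSupport[MongoDB.type] = new TtlSupport[MongoDB.type]
  implicit val cassandraTtl: TtlSupport[Cassandra.type] = new TtlSupport[Cassandra.type]
}

// a hypothetical Repo method gated on the evidence:
// def createWithTtl[P](p: P, ttl: FiniteDuration)
//   (implicit ttlSupport: TtlSupport[BE], pEv: PEv[M, P]): F[PState[P]]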

Support "No JSON" format for JDBC back ends

All fields will have to be realized, and the JSON removed. We probably want to support both this and the JSON format, e.g., as separate back ends. You're going to need to use auxiliary tables to represent sets and lists. This is a prerequisite for operations like updateByKeyVal in JDBC.

Add IDEA support for the longevity annotations

IDEA can't expand the longevity macro annotations, and consequently shows error messages for primaryKey and props in an example like this:

@persistent[DomainModel]
case class User(username: Username, email: Email, fullName: FullName)
object User {
  implicit val usernameKey = primaryKey(props.username)
}

For references to User in other source files, while implicit resolution of the generated PEvs and the keys seems to work fine, we still get compiler errors on things like User.props and User.queryDsl (inherited from PType).

It looks like the right way to handle this is to use the "IntelliJ API to build scala macros support": https://blog.jetbrains.com/scala/2015/10/14/intellij-api-to-build-scala-macros-support/

Another potential alternative is to make modifications or enhancements to the longevity API that allow users to specify things a little differently, so that IDEA can follow along. I'm thinking along the lines of:

  1. Allow users to say something like this:
@persistent[DomainModel]
case class User(username: Username, email: Email, fullName: FullName)
object User extends PType[DomainModel, User] {
  implicit val usernameKey = primaryKey(props.username)
}

So the @persistent annotation already extends the User companion object with PType, but we could allow this redundancy easily enough.

  2. Add a PType method something like:
def dprop[A](propName: String): Prop[User, A]

This method finds the right property in the User.props object generated by the @persistent macro. In this case, we could say dprop[Username]("username") instead of props.username. Of course, we would lose a lot of type safety guarantees here: this method could throw a runtime exception on misnamed or mistyped properties.

It looks like these two changes would give IDEA users workarounds for the build errors. The non-workaround approach, using the IntelliJ API for scala macros, would be much more desirable, but probably a good deal more work.

Functionalize the test data generator API

Right now, there are two major problems with the longevity.test.TestDataGenerator API. First, it's not functional. We should use a state monad approach to keep track of the current seed. Second, you can't specify the initial seed, which makes reproducible testing harder.

Let's see if we can't remedy both of these problems, while continuing to provide an easy-to-use API.

I took a stab at this in a branch named feat/test-data-gen-api; I'm not sure how far I got, or whether it will still be helpful or relevant.
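A minimal sketch of the state monad approach, with a hand-rolled state transition over a hypothetical Seed type (a real implementation might use cats' State instead); the caller supplies the initial seed, which makes runs reproducible:

// hypothetical sketch: thread the seed through purely, and let the
// caller supply the initial seed for reproducible test runs
case class Seed(value: Long) {
  def next: Seed = Seed(value * 6364136223846793005L + 1442695040888963407L)
}

case class Gen[A](run: Seed => (Seed, A)) {
  def map[B](f: A => B): Gen[B] =
    Gen { s => val (s2, a) = run(s); (s2, f(a)) }
  def flatMap[B](f: A => Gen[B]): Gen[B] =
    Gen { s => val (s2, a) = run(s); f(a).run(s2) }
}

// each generated value consumes the current seed and passes on the next one
val genInt: Gen[Int] = Gen { s => (s.next, (s.value >>> 16).toInt) }

// the same initial seed always produces the same test data
val (_, pair) = (for { a <- genInt; b <- genInt } yield (a, b)).run(Seed(42L))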
