
scalacache's Introduction

ScalaCache

Join the chat at https://gitter.im/cb372/scalacache


A facade for the most popular cache implementations, with a simple, idiomatic Scala API.

Use ScalaCache to add caching to any Scala app with the minimum of fuss.

Several cache implementations are supported out of the box, including Memcached, Redis, Guava, and Caffeine, and it's easy to plug in your own implementation.

Documentation

Documentation is available on the ScalaCache website.

Compatibility

ScalaCache is available for Scala 2.11.x, 2.12.x, and 2.13.x.

The JVM must be Java 8 or newer.

Compiling the documentation

To make a change to the documentation:

  1. Make sure that memcached is running on localhost:11211
  2. Perform the desired changes
  3. Run sbt doc/makeMicrosite

scalacache's People

Contributors

augi, bvisserbb, catap, cb372, fabriziofortino, gitter-badger, golem131, gurinderu, irasmivan, jan0sch, jleider, jmlamodiere, kubukoz, lewisjkl, licentious, lloydmeta, mayman, mchv, mdedetrich, mergify[bot], nicl, ploddi, ronnnnnnnnnnnnn, scala-steward, seratch, stephennancekivell, taylorwood, tototoshi, yadavan88, yurique

scalacache's Issues

sync.caching is not deterministic

When getting a value from the cache, it seems there is a possibility that a previously cached value will not be returned. For example:

import scalacache.sync

def getValue(key: String): String = sync.caching(key) { ... }

val a = getValue("key1")
val b = getValue("key1") // this call may not get the value from the cache

So I read the cache's code. Apparently, putting a value into the cache is done asynchronously, so getting a value from the cache is not deterministic. Is this intended behavior?
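
A possible workaround in the meantime, sketched against the pre-1.0 get/put API shown elsewhere on this page (the timeout and the computeValue helper are illustrative, not part of the library): block on the Future returned by put, so the write has completed before the method returns.

  import scala.concurrent.Await
  import scala.concurrent.duration._
  import scalacache._
  import scalacache.guava._

  object SyncWorkaround {
    implicit val scalaCache = ScalaCache(GuavaCache())

    // On a miss, compute the value and await the Future returned by put,
    // so the write is finished before this method returns.
    def getValue(key: String): String =
      Await.result(get[String](key), 5.seconds).getOrElse {
        val value = computeValue(key) // hypothetical expensive computation
        Await.result(put(key)(value), 5.seconds) // put returns a Future[Unit]
        value
      }

    def computeValue(key: String): String = s"value for $key"
  }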

Doesn't work well with Squeryl?

I'm trying to integrate ScalaCache with a project that uses Squeryl with no luck.

The following code compiles without memoize, but it doesn't compile with memoize:

  def foo(userId: Int): Option[User] = memoize {
    inTransaction {
      from(users)((u) =>
        select(u)
      )
    }.headOption
  }

The error message is here:

> compile
[info] Compiling 1 Scala source to /path/to/project/target/scala-2.11/classes...
[trace] Stack trace suppressed: run last project/compile:compile for the full output.
[error] (project/compile:compile) scala.reflect.internal.Types$TypeError: value <none> is not a member of scala.runtime.AbstractFunction0[Option[some.package.User]] with Serializable

Here's the definition of User and Schema in case you need it:

import java.util.Date

import org.squeryl.KeyedEntity
import org.squeryl.Schema
import org.squeryl.annotations.Column

class User(
  val id: Int = 0,
  val name: String,
  val status: Int,
  @Column("created_at")
  var createdAt: Date
) extends KeyedEntity[Int] {
  def this() = {
    this(0, "", 0, new Date())
  }

  def this(name: String, status: Int) = {
    this(0, name, status, new Date())
  }

  def this(name: String, status: UserStatus) = {
    this(0, name, status.dbval, new Date())
  }
}

object FooDb extends Schema {
  val users = table[entities.User]("user")
}

Thanks.

Fix up build.sbt

I just did a clone of scalacache, and I am getting an error when running sbt, pointing at this line:

lazy val deployToMavenCentral = ReleaseStep(action = releaseTask(sonatypeReleaseAll))

Seems like it's relying on an uncommitted file for release purposes?

Flags is ignored?

Hi,

I've been using this library for quite some time, and it's been helping me a lot. Thanks for that.

I'm not sure if I understand how to use Flags correctly, but this might be a bug, so I'm reporting it. Basically, in the following example, I'm expecting the variable user1after to be a fresh value, not the cached one.

case class User(id: Int, name: String)

object Test {

  import scalacache._
  import scalacache.Flags
  import memoization._
  import guava._

  import concurrent.duration._
  import scala.language.postfixOps

  import java.util.Date

  implicit val scalaCache = ScalaCache(GuavaCache())

  def generateNewName() = {
    Thread.sleep(500)
    val newName = (new Date).getTime.toString
    println(s"New name = $newName")
    newName
  }

  def getUser(id: Int): User = memoize {
    User(id, generateNewName())
  }

  def getUserWithTtl(id: Int): User = memoize(1 days) {
    User(id, generateNewName())
  }

  def testReads() = {
    val user1before = getUser(1)
    val user1after = {
      implicit val flags = Flags(readsEnabled = false)
      getUser(1)
    }
    assert (user1before != user1after, s"testReads: readsEnabled = false was ignored? $user1before, $user1after")
  }

  def testReadsWithTtl() = {
    val user1before = getUserWithTtl(1)
    val user1after = {
      implicit val flags = Flags(readsEnabled = false)
      getUserWithTtl(1)
    }
    assert (user1before != user1after, s"testReadsWithTtl: readsEnabled = false was ignored? $user1before, $user1after")
  }

  def main(args: Array[String]): Unit = {
    testReads()
    testReadsWithTtl()
  }
}

Thanks in advance.

Optimise all the things

As we discovered in #82, ScalaCache has quite a high overhead compared to accessing the underlying cache directly. Let's see what we can do to improve this.

MDC doesn't work for the debug logs in memoizeSync calls

I am trying to use MDC with the help of the article below to trace the flow of each request in my application.

http://code.hootsuite.com/logging-contextual-info-in-an-asynchronous-scala-application/

It sets some tags in a ThreadLocal and requires customized ExecutionContexts so that the tags can be passed between threads. This means that if any Future task uses the default global ExecutionContext, the logs in that task will be missing those tags.

I'm using GuavaCache and found that those tags are missing in GuavaCache's debug logs.

It looks like even for sync calls, Futures and the global ExecutionContext are used, with no possibility to override them. My guess is that this is done to reuse code between the sync and async paths.

Reference:

val future = self.cachingForMemoize(baseKey)(ttl)(Future.successful(f))(flags, ExecutionContext.global)

How can this be solved?

Would you allow passing a custom ExecutionContext for sync calls as well, or perhaps redesign the sync calls?

Detect Parameters when using implicit class

Assuming you want to memoize some function that's defined inside an implicit class, e.g.

implicit class RichUser(u: User) { // class name (RichUser) added; the original snippet omitted it
  def someCachedFunc(int: Int) = memoize { ..... }
}

The macro that ScalaCache uses to generate a unique key for that function will miss the u: User parameter, which means the cache won't work properly if you do something like this:

val u:User // some instance of user
val u2:User // another separate instance of user
u.someCachedFunc(3)
u2.someCachedFunc(3)
// Both will return the same result, even though u can be completely different from u2, because the two calls happen to generate the same key (the u parameter is missing from it).

It would be handy if we could detect such cases; I'm pretty sure it's possible for Scala macros to do that.

Start looking at/building for Scala 2.12?

Scala 2.12 is already at M3, and a lot of major libraries are cross building/doing releases against it. It would be good if ScalaCache did the same, as it would iron out any potential compatibility issues before Scala 2.12 hits an actual release.

ScalaCache requires Guava Cache[String, Object]

Guava's CacheBuilder does not require that the resulting cache be Cache[String, Object], but as far as I can see, ScalaCache does not support any other value type for a custom CacheBuilder. Here is some sample code that compiles:

  import com.google.common.cache
  import com.google.common.cache._
  import java.util.concurrent.TimeUnit
  import scalacache._
  import scalacache.guava._

  val underlyingGuavaCache: cache.Cache[String, Object] = CacheBuilder.newBuilder()
    .concurrencyLevel(4)
    .softValues()
    .expireAfterWrite(5, TimeUnit.MINUTES)
    .build[String, Object]
  implicit val scalaCache = ScalaCache(GuavaCache(underlyingGuavaCache))

... and this is an example of what I'd like to be able to do:

  import com.google.common.cache
  import com.google.common.cache._
  import java.util.concurrent.TimeUnit
  import scalacache._
  import scalacache.guava._

  val underlyingGuavaCache: cache.Cache[String, String] = CacheBuilder.newBuilder()
    .concurrencyLevel(4)
    .softValues()
    .expireAfterWrite(5, TimeUnit.MINUTES)
    .build[String, String]
  implicit val scalaCache = ScalaCache(GuavaCache(underlyingGuavaCache))

Is there a way to accomplish this?

java.lang.IncompatibleClassChangeError with JDK8, SBT 0.13.7 and scala 2.11.5

Using sbt 0.13.7, JDK 8 on Mac OS X, and Scala 2.11.5:

$ java -version
java version "1.8.0_31"
Java(TM) SE Runtime Environment (build 1.8.0_31-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.31-b07, mixed mode)

I'm getting

[error] Exception in thread "main" java.lang.IncompatibleClassChangeError: Implementing class
[error]     at java.lang.ClassLoader.defineClass1(Native Method)
[error]     at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
[error]     at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
[error]     at java.net.URLClassLoader.defineClass(URLClassLoader.java:455)
[error]     at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
[error]     at java.net.URLClassLoader$1.run(URLClassLoader.java:367)
[error]     at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
[error]     at java.security.AccessController.doPrivileged(Native Method)
[error]     at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
[error]     at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
[error]     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
[error]     at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
[error]     at java.lang.ClassLoader.defineClass1(Native Method)
[error]     at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
[error]     at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
[error]     at java.net.URLClassLoader.defineClass(URLClassLoader.java:455)
[error]     at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
[error]     at java.net.URLClassLoader$1.run(URLClassLoader.java:367)
[error]     at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
[error]     at java.security.AccessController.doPrivileged(Native Method)
[error]     at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
[error]     at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
[error]     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
[error]     at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
[error]     at scalacache.redis.RedisCache$.apply(RedisCache.scala:83)
[error]     at scalacache.redis.RedisCache$.apply(RedisCache.scala:77)
[...]

using the simple code

 import scalacache._
 import redis._
 implicit val scalaCache = ScalaCache(RedisCache("host1", 6379))

Is this expected at the moment? Is it a backward-compatibility issue? I have already tried cleaning before compiling.

Question: Disable caching temporarily

I'm currently evaluating some caching libraries that can be used in Scala and yours looks really useful.

Now, I have a question: is it possible to disable caching temporarily, depending on the context? Basically, I want to cache results from the database, but in some cases I need to fetch the latest data directly from it.

It would be nice if I could do something like the following:

object FooDao {
  def getUser(id: Int): User = memoize {
    ...
  }
}

object FooModel {
  def bar = {
    implicit val disableCaching = ...
    val user = FooDao.getUser(1)
  }
  def baz = {
    implicit val scalaCache = ScalaCache(new MyCache())
    val user = FooDao.getUser(1)
  }
}

Thanks!
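
For reference, a sketch of how the Flags mechanism (see the "Flags is ignored?" issue above) could cover the first case, under the assumption that the memoized method declares an implicit Flags parameter so that flags set at the call site are actually picked up:

  import scalacache._
  import scalacache.guava._
  import scalacache.memoization._

  case class User(id: Int, name: String)

  object FooDao {
    implicit val scalaCache = ScalaCache(GuavaCache())

    // The implicit Flags parameter lets callers control caching per call.
    def getUser(id: Int)(implicit flags: Flags): User = memoize {
      fetchLatestFromDb(id) // hypothetical DB lookup
    }

    def fetchLatestFromDb(id: Int): User = User(id, "fresh")
  }

  object FooModel {
    def bar: User = {
      // Skip the cache read for this call, forcing a trip to the DB.
      implicit val flags = Flags(readsEnabled = false)
      FooDao.getUser(1)
    }
  }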

Call to `cachingWithTTL` (via macro `memoizeSync`) never returns...

Apologies if I'm doing something wrong, but I can't imagine what. It appears as though the (private) _cachingSync method is awaiting (via Await.result) a Future that either doesn't exist or was attached to a thread that has somehow disappeared or died off.

I first noticed this problem when my main thread locked up: I see no further log messages from my application. I've allowed it to stay in this state for up to (give or take) 24 hours. A jstack dump (see attached) appears to show the main thread locked up, waiting for the cache to come back, but there doesn't appear to be any other thread busy "looking" for the respective value to be cached.

The method that uses the cache looks like this, where Session refers to a Slick database session/connection:

  def find(mid: String)(implicit s: Session): Option[Merchant] =
    memoizeSync(10.minutes) {
      // Query the database here...
    }

(In the attached jstack dump, I've simply clipped the irrelevant portion of the main thread's stack-trace.)

I am using the Guava back-end and my ScalaCache object is initialized as such:

implicit val scalaCache = scalacache.ScalaCache(scalacache.guava.GuavaCache())

jstack.txt

Cross Compile for Scala.js

The intention would essentially be to cross compile ScalaCache for both the JVM and Scala.js (and maybe, in the future, Scala Native!).

In terms of general design, this should be fairly easy (considering that we use macros). The main issue I see right now is how the code is organised in terms of project structure. I see a possible solution as follows:

  • scalacache-core becomes its own project and is cross compiled for Scala.js/JVM
  • The BaseCodecs (https://github.com/cb372/scalacache/blob/ee82acd128e3c9c984fe72e3d18b76c4f74c4266/core/src/main/scala/scalacache/serialization/BaseCodecs.scala) get moved into their own project
    • The reason is that there isn't really a corresponding byte type in Javascript. Although there is interop with Byte (see https://www.scala-js.org/doc/interoperability/types.html), it will have terrible performance due to the semantics of Javascript treating it as an integer.
    • From a design POV, the base codecs don't make much sense as part of scalacache-core, since the core shouldn't make assumptions about the type of serialization.
    • Note that this isn't mandatory; it's just that having the byte serializers can be considered misleading when taking Javascript into account (for the reason stated earlier).
    • Some modules (such as memcached) would depend on scalacache-basecodecs (pulling in scalacache-core transitively) rather than depending on scalacache-core directly. This of course depends on the implementation of each ScalaCache module.
  • All of the scalacache modules are untouched (apart from changing their dependencies to point to the new scalacache-core/scalacache-basecodecs)
  • Find a good implementation of an eviction-based cache for Javascript, maybe something like https://github.com/monsur/jscache to start off with. Again, this is not mandatory, but it would be nice to demonstrate a usable Scala.js cache.

Note that none of this should have any ramifications for end users; it's mainly infrastructure changes.

Thoughts? I would also be willing to do the work if the author has no issues with it

Make memcached ops non-blocking

Currently we're just wrapping Spymemcached in Future { ... }, but we can use Spy's callbacks to complete a Promise, a la Shade.
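
A sketch of the idea, assuming Spymemcached's listener API (addListener / GetCompletionListener): complete a Scala Promise from the callback instead of blocking inside Future { ... }.

  import scala.concurrent.{Future, Promise}
  import net.spy.memcached.MemcachedClient
  import net.spy.memcached.internal.{GetCompletionListener, GetFuture}

  // Bridge Spymemcached's callback to a Scala Future without blocking a thread.
  def asyncGet(client: MemcachedClient, key: String): Future[Option[AnyRef]] = {
    val p = Promise[Option[AnyRef]]()
    client.asyncGet(key).addListener(new GetCompletionListener {
      def onComplete(f: GetFuture[_]): Unit =
        if (f.getStatus.isSuccess) p.success(Option(f.get()))
        else p.success(None) // treat any non-success status as a cache miss
    })
    p.future
  }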

If I want the actual result from get(key), should I use Await? (Guava cache)

I'm having a bit of a hard time converting the result of the future cache get into an object that the calling method can read and use. In this case it's a Map.

I can use Await.result(futureVal, Duration(1, "seconds")), which works wonderfully. While this is more a general question about getting the direct result of a Future into a variable, I'm wondering what the canonical way to do this is with ScalaCache. Await seems better to me than the sync calls (which... have they been removed? I don't see them in the code, but they are in the readme), since I can recover and repopulate if the result is empty.

BTW, I think the wrapper library for all these caches is really nice.

Thanks for your contribution.

Support async access in memoize

Currently memoize can only be applied to some function f that returns the object we want to cache, like the example demonstrated in the README:

def getUser(id: Int): User = memoize {
  // Do DB lookup here...
  User(id, s"user${id}")
}

But most of the time we do the DB lookup asynchronously, so the return type of the getUser function is usually Future[User] rather than User. It would be great if this could be supported.
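
A sketch of what the requested usage could look like; this is purely hypothetical (it is the feature being asked for, not the current API), and userDao.findById is an assumed async DB call:

  import scala.concurrent.Future

  // Hypothetical: the memoized body itself returns a Future, and the
  // successful value is what ends up in the cache.
  def getUser(id: Int): Future[User] = memoize {
    userDao.findById(id) // assumed to return Future[User]
  }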

sbt 0.13.12

This involves replacing project/Build.scala with build.sbt

Document TTL precedence

The current docs/readme are not clear on TTL precedence.

E.g. if I specify expireAfterWrite (basically a TTL for entries) on the underlying Caffeine cache, and I do not specify a TTL in calls to put or in the memoize variants used, will entries expire after the underlying cache's TTL or not?
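
To make the question concrete, here is a sketch of the setup in question (the CaffeineCache constructor details are an assumption and may vary by ScalaCache version):

  import java.util.concurrent.TimeUnit
  import com.github.benmanes.caffeine.cache.Caffeine
  import scalacache._
  import scalacache.caffeine._

  // The underlying cache applies its own TTL to every entry...
  val underlying = Caffeine.newBuilder()
    .expireAfterWrite(5, TimeUnit.MINUTES)
    .build[String, Object]()
  implicit val scalaCache = ScalaCache(CaffeineCache(underlying))

  // ...and no TTL is passed here. Which one wins?
  put("myKey")("myValue")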

Also, please consider starting a chat room (Gitter, Slack, ...) where the community can answer questions like these.

Create an annotation to ignore parameters for key generation

Not sure if this is possible, but it would be fantastic if ScalaCache provided an annotation (let's say @ignoreCache) which would allow you to tell the memoize function to ignore certain parameters when it generates a key.

It would be a lot less verbose than having to create a MethodCallToStringConverter, and it's a very common use case for us (i.e. we have database connections as parameters, which we really don't want to be part of the cache key, and writing a separate implementation of MethodCallToStringConverter for every single method gets tedious).

I was thinking of something along these lines:

def doSomething(userId: UserId)(implicit @ignoreCache databaseConnection: DatabaseConnection) = memoize { .. }

The @ignoreCache in this case would mean that the key for doSomething will only depend on the userId.

Futures Are Always Created for Local Cache Accesses

Local cache accesses generally return a result very quickly, especially for low concurrencyLevel values in Guava. Creating and invoking an ExecutionContext requires a context switch, which has non-trivial overhead. I do not believe that wrapping such a computation in a Future provides any benefit in this case.

Instead of calling Await.result() for every synchronous request, the opposite approach would handle all use cases better: synchronous requests should be performed without a Future, and async results should wrap synchronous calls in a Future.
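
A minimal sketch of the proposed inversion (the trait and method names are illustrative, not ScalaCache's actual types):

  import scala.concurrent.{ExecutionContext, Future}

  trait SyncFirstCache[V] {
    // The synchronous path does the work directly; no Future is created.
    def getSync(key: String): Option[V]

    // The asynchronous path merely wraps the synchronous one.
    def getAsync(key: String)(implicit ec: ExecutionContext): Future[Option[V]] =
      Future(getSync(key))
  }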

Support custom serialization

Currently ScalaCache is hardcoded to use Java serialization for both Redis and Memcached. It might be nice to give people some options, e.g. JSON or Msgpack.

scala-redis 2.11 version

Might be fastest to send a PR to scala-redis to add 2.11 cross-building.

The project doesn't seem to be actively maintained anymore. Any other Redis clients out there yet?

Documentation: Flags.readsEnabled invalidates the cache

Hi,

I've been using Scalacache for some time. Thank you for the great library.

I faced a situation where our system needs to invalidate the cached result of a memoized method. I forked the code and was going to implement it, but then noticed that Flags(readsEnabled = false) already does that (correct me if I'm wrong). I thought it just skipped cache reads temporarily and didn't affect the cached result.

Let me explain my misunderstanding using the getUser example for clarification.

saveToDb(User(1, "Prince"))
val user1a = getUser(1, false) // From the DB
val user1b = getUser(1, false) // Cached result
saveToDb(User(1, "The Artist ...."))
val user1c = getUser(1, true) // From the DB
val user1d = getUser(1, false) // Cached result

What I thought was:

  • user1a = user1b = user1d = Prince
  • user1c = The Artist ...

The real behavior (as far as I see from the code) is:

  • user1a = user1b = Prince
  • user1c = user1d = The Artist ...

Maybe because I'm not a native English speaker, the sentence "Cache GETs and/or PUTs can be temporarily disabled using flags" made me misunderstand the behavior. It would be helpful for someone like me if a little note about this were added.

By the way, I tested it by adding the following code to Issue42Spec:

  "memoize with readsEnabled" should "invalidate cache" in {
    val user1a = getUser(1)
    val user1b = getUser(1)
    val user1c = {
      implicit val flags = Flags(readsEnabled = false)
      getUser(1)
    }
    val user1d = getUser(1)
    user1a should not be user1c
    user1a should be(user1b)
    user1c should be(user1d)
  }

This test case fails 8 out of 10 times on my machine. A sort of timing issue?

Thank you.

ClassNotFoundException when deserializing List[T] from redis

I encountered a ClassNotFoundException when using memoize.
I'm using scalacache-redis 0.5.2.

This script is a minimal example that reproduces the error:

case class User(id: Int, name: String)

object Test {

  import scalacache._
  import memoization._
  import redis._

  implicit val scalaCache = ScalaCache(RedisCache("localhost", 6379))

  // Works fine when the result type is User, but throws Exception when List[User]
  def getUser(id: Int): List[User] = memoize {
    List(User(id, "Taro"))
  }

  def main(args: Array[String]): Unit = {
    getUser(1)
  }

}
scalacachedebug> r
[info] Running Test
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
[error] (run-main-3a) java.lang.ClassNotFoundException: User
java.lang.ClassNotFoundException: User
        at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:344)
        at java.io.ObjectInputStream.resolveClass(ObjectInputStream.java:626)
        at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1613)
        at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
        at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:477)
        at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:483)
        at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1896)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
        at scalacache.redis.RedisSerialization$class.deserialize(RedisSerialization.scala:57)
        at scalacache.redis.RedisCache.deserialize(RedisCache.scala:13)
        at scalacache.redis.RedisCache$$anonfun$get$1$$anonfun$apply$1$$anonfun$1.apply(RedisCache.scala:30)
        at scalacache.redis.RedisCache$$anonfun$get$1$$anonfun$apply$1$$anonfun$1.apply(RedisCache.scala:30)
        at scala.Option.map(Option.scala:146)
        at scalacache.redis.RedisCache$$anonfun$get$1$$anonfun$apply$1.apply(RedisCache.scala:30)
        at scalacache.redis.RedisCache$$anonfun$get$1$$anonfun$apply$1.apply(RedisCache.scala:28)
        at scala.concurrent.impl.ExecutionContextImpl$DefaultThreadFactory$$anon$2$$anon$4.block(ExecutionContextImpl.scala:48)
        at scala.concurrent.forkjoin.ForkJoinPool.managedBlock(ForkJoinPool.java:3640)
        at scala.concurrent.impl.ExecutionContextImpl$DefaultThreadFactory$$anon$2.blockOn(ExecutionContextImpl.scala:45)
        at scala.concurrent.package$.blocking(package.scala:123)
        at scalacache.redis.RedisCache$$anonfun$get$1.apply(RedisCache.scala:28)
        at scalacache.redis.RedisCache$$anonfun$get$1.apply(RedisCache.scala:28)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
        at scala.concurrent.impl.ExecutionContextImpl$AdaptedForkJoinTask.exec(ExecutionContextImpl.scala:121)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[trace] Stack trace suppressed: run last compile:run for the full output.
java.lang.RuntimeException: Nonzero exit code: 1
        at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Nonzero exit code: 1
[error] Total time: 0 s, completed 2015/03/08 22:19:23
127.0.0.1:6379> keys *
1) "Test.getUser(1)"
127.0.0.1:6379> get Test.getUser(1)
"\x05\xac\xed\x00\x05sr\x002scala.collection.immutable.List$SerializationProxy\x00\x00\x00\x00\x00\x00\x00\x01\x03\x00\x00xpsr\x00\x04Userx\xae\xbf\rY%\x97\xf6\x02\x00\x02I\x00\x02idL\x00\x04namet\x00\x12Ljava/lang/String;xp\x00\x00\x00\x01t\x00\x04Tarosr\x00,scala.collection.immutable.ListSerializeEnd$\x8a\\c[\xf7S\x0bm\x02\x00\x00xpx"

Alternate names or prefixes for `get`, `put`, and so on

I already have get in scope as a function for getting the value in a state monad (and the inverse for put). Is there any way that I can have different names for these functions? Can I just rename the imports? Something like import scalacache.{put => cache, get => lookup}?

Codec needed to cache (Seq of) custom types in Caffeine cache

After upgrading scalacache from 0.7.5 to 0.8.0, compilation of

import scalacache._
import caffeine._

implicit val scalaCache = ScalaCache(CaffeineCache())

def baz(): Seq[foo.Bar] = ...

def logic(): Seq[foo.Bar] = sync.cachingWithTTL(key)(ttl)(baz)

fails at the cachingWithTTL call with:

Could not find any Codecs for type Seq[foo.Bar]. Please provide one or import scalacache._

Caffeine is an in-memory cache, so why is a codec (i.e. serialization/deserialization) needed at all?

Make a website

The readme is getting pretty unwieldy. It would be good to make it easier to navigate, and maybe split it into several pages.

Goals, in order of priority:

  1. Ease of creation and maintenance
  2. Look pretty

An argument named "key" to memoized method doesn't compile

The following code compiles:

  def foo(x: Int): Int = memoize {
    x + 1
  }

But this doesn't:

  def foo(key: Int): Int = memoize {
    key + 1
  }

The error message is shown below:

[error] /path/to/Foo.scala:79: recursive value key needs type
[error]   def foo(key: Int): Int = memoize {
[error]                                    ^
[error] one error found

Not a big deal, but it would be nice if you could add a note to the "Troubleshooting/Restrictions" section.

Thanks.

Support Duration.Inf

If Duration.Inf is passed as a TTL, ScalaCache should take it to mean "no TTL".
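
A one-line sketch of the requested behavior, as a hypothetical helper rather than the library's actual code:

  import scala.concurrent.duration.Duration

  // Duration.Inf (or any non-finite duration) normalizes to "no TTL".
  def normalizeTtl(ttl: Duration): Option[Duration] =
    if (ttl.isFinite) Some(ttl) else None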

Time to live type mismatch

Hey, I'm following the TTL example from the README and it results in a compiler error:

  import scalacache._
  import scalacache.guava._
  import concurrent.duration._

  implicit val scalaCache = ScalaCache(GuavaCache())

  // Add an item to the cache
  put("myKey")("myValue") // returns a Future[Unit]

  // Add an item to the cache with a Time To Live
  put("otherKey")("otherValue", ttl = 10.seconds)
type mismatch;
 found   : scala.concurrent.duration.FiniteDuration
 required: Option[scala.concurrent.duration.Duration]
  put("otherKey")("otherValue", ttl = 10.seconds)

It seems that I can only get it to work if I do one of the following two things:

implicit def duration2Opt(duration: FiniteDuration): Option[Duration] = Some(duration)

or

put("otherKey")("otherValue", ttl = Some(10.seconds))

Am I missing something to get it to work without needing to provide an implicit conversion or wrap the duration in a Some?

Thanks.

memoize throws InvalidClassException when the cached class has changed

memoize throws java.io.InvalidClassException when the definition of a cached class has changed since it was cached. The version is 0.6.0.

How to reproduce

Here are the steps to reproduce the issue (I borrowed the code from issue 32):

  • Compile and run the following code:
case class User(id: Int, name: String)

object Test {

  import scalacache._
  import memoization._
  import redis._

  implicit val scalaCache = ScalaCache(RedisCache("localhost", 6379))

  def getUser(id: Int): User = memoize {
    User(id, "Taro")
  }

  def main(args: Array[String]): Unit = {
    getUser(1)
  }

}
  • Make a small change to the User class, like the following:
case class User(id: Int, name: String, foo: String = "")
  • Compile and run the code again.

Stacktrace

Below is the error that I got:

[error] (run-main-2) java.io.InvalidClassException: User; local class incompatible: stream classdesc serialVersionUID = 8696097994550122486, local class serialVersionUID = -576101910684720136
java.io.InvalidClassException: User; local class incompatible: stream classdesc serialVersionUID = 8696097994550122486, local class serialVersionUID = -576101910684720136
        at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:617)
        at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1622)
        at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
        at scalacache.redis.RedisSerialization$class.deserialize(RedisSerialization.scala:59)
        at scalacache.redis.RedisCache.deserialize(RedisCache.scala:15)
        at scalacache.redis.RedisCache$$anonfun$get$1$$anonfun$apply$1$$anonfun$apply$2$$anonfun$1.apply(RedisCache.scala:33)
        at scalacache.redis.RedisCache$$anonfun$get$1$$anonfun$apply$1$$anonfun$apply$2$$anonfun$1.apply(RedisCache.scala:33)
        at scala.Option.map(Option.scala:146)
        at scalacache.redis.RedisCache$$anonfun$get$1$$anonfun$apply$1$$anonfun$apply$2.apply(RedisCache.scala:33)
        at scalacache.redis.RedisCache$$anonfun$get$1$$anonfun$apply$1$$anonfun$apply$2.apply(RedisCache.scala:31)
        at scalacache.redis.RedisCache.scalacache$redis$RedisCache$$withJedisClient(RedisCache.scala:81)
        at scalacache.redis.RedisCache$$anonfun$get$1$$anonfun$apply$1.apply(RedisCache.scala:31)
        at scalacache.redis.RedisCache$$anonfun$get$1$$anonfun$apply$1.apply(RedisCache.scala:31)
        at scala.concurrent.impl.ExecutionContextImpl$DefaultThreadFactory$$anon$2$$anon$4.block(ExecutionContextImpl.scala:48)
        at scala.concurrent.forkjoin.ForkJoinPool.managedBlock(ForkJoinPool.java:3640)
        at scala.concurrent.impl.ExecutionContextImpl$DefaultThreadFactory$$anon$2.blockOn(ExecutionContextImpl.scala:45)
        at scala.concurrent.package$.blocking(package.scala:123)
        at scalacache.redis.RedisCache$$anonfun$get$1.apply(RedisCache.scala:30)
        at scalacache.redis.RedisCache$$anonfun$get$1.apply(RedisCache.scala:30)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
        at scala.concurrent.impl.ExecutionContextImpl$AdaptedForkJoinTask.exec(ExecutionContextImpl.scala:121)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Thanks.

Benchmarking

Would be interesting to see how much overhead ScalaCache adds vs using a cache implementation directly.

JMH?
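
A minimal JMH sketch of what such a comparison could look like, assuming the sbt-jmh plugin and the pre-1.0 sync API used elsewhere on this page:

  import java.util.concurrent.TimeUnit
  import com.google.common.cache.CacheBuilder
  import org.openjdk.jmh.annotations._
  import scalacache._
  import scalacache.guava._

  @State(Scope.Benchmark)
  @BenchmarkMode(Array(Mode.AverageTime))
  @OutputTimeUnit(TimeUnit.NANOSECONDS)
  class OverheadBenchmark {
    val underlying = CacheBuilder.newBuilder().build[String, Object]()
    underlying.put("key", "value")
    implicit val scalaCache = ScalaCache(GuavaCache(underlying))

    // Baseline: hit the Guava cache directly.
    @Benchmark
    def directGuava(): Object = underlying.getIfPresent("key")

    // The same lookup through the ScalaCache facade.
    @Benchmark
    def throughScalaCache(): String = sync.caching("key")("value")
  }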

memoize: can we disable caching depending on the result?

Hello,

I want to disable caching if the result of the function is not OK for me.

In this example:

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future
import scala.util.{Failure, Success, Try}

def getUser(id: Int): Future[Try[User]] = memoize {
  ExternalApi.call() map { data =>
    Success(User(data))
  } recover {
    case ex => Failure(ex)
  }
}

I don't want to cache the exception. Is there a simple way to do it?

Thanks.
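
One possible approach, sketched with the names from the example above. It rests on an assumption worth verifying against your ScalaCache version: that the value of a failed Future is never written to the cache. If that holds, letting the Future fail (instead of materializing the error into a Failure) keeps errors out of the cache, and callers can recover at the call site:

  import scala.concurrent.ExecutionContext.Implicits.global
  import scala.concurrent.Future

  // Assumption: a failed Future is not cached, so errors are recomputed.
  def getUser(id: Int): Future[User] = memoize {
    ExternalApi.call() map { data => User(data) } // no recover here
  }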

Should call jedisPool.destroy

Hi,

Summary

I'm having a problem where a program runs out of connections to Redis, and I'm guessing that it might be caused by scalacache-redis.

I'm not 100% sure, but I think one of the three options below needs to be taken:

  1. Use jedisPool.returnResource(jedis) instead of jedis.close()
  2. Make RedisCache#jedisPool public, so client code can call JedisPool#destroy
  3. Instantiate plain Jedis every time instead of getting one from the pool

Details

jedis.close is a problem?

Here is some sample code from the Jedis wiki:

Jedis jedis = null;
try {
  jedis = pool.getResource();
  /// ... do stuff here ... for example
  jedis.set("foo", "bar");
  String foobar = jedis.get("foo");
  jedis.zadd("sose", 0, "car"); jedis.zadd("sose", 0, "bike"); 
  Set<String> sose = jedis.zrange("sose", 0, -1);
} finally {
  if (jedis != null) {
    jedis.close();
  }
}
/// ... when closing your application:
pool.destroy();

I also took a brief look at the Jedis source code, and it seems Jedis#close actually closes the connection. The sample code above calls pool.destroy() at the end, so it doesn't matter there, but I don't think client code should close a connection taken from the connection pool (in general, no?).

This is an old thread, so I'm not sure this is still valid now,
https://groups.google.com/forum/?fromgroups=#!topic/jedis_redis/UeOhezUd1tQ

but it says:

Returning the resource to the Pool is important, so remember to do it. Otherwise when closing your app it'll wait for the resource to return.

The sample code and the wiki page recommend two different ways, and they are somewhat contradictory, but in any case, I guess calling only Jedis#close isn't enough.

Also, another Jedis wiki page says the following:

Forgetting pool.destroy keeps the connection open until timeout is reached.

My conclusion

So now we've got two options.

Returning the resource to the Pool is important, so remember to do it. Otherwise when closing your app it'll wait for the resource to return.

If the above sentence is still valid, the option below is the way to go:

Use jedisPool.returnResource(jedis) instead of jedis.close()

If we want to follow the sample doc, we want to call JedisPool#destroy, so the following is an option:

Make RedisCache#jedisPool public, so client code can call JedisPool#destroy

We also have the option of not using connection pooling and sticking to plain Jedis, in which case Jedis#close should be a valid way to clean up when you finish using the connection.
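
For reference, a sketch of option 2 from the list above, with the pool managed by the application (host and port are illustrative):

  import redis.clients.jedis.{Jedis, JedisPool}

  val pool = new JedisPool("localhost", 6379)

  val jedis: Jedis = pool.getResource
  try {
    jedis.set("foo", "bar")
  } finally {
    jedis.close() // whether this returns or closes the connection is the crux of this issue
  }

  // ... when closing your application:
  pool.destroy() // releases all pooled connections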

Thanks.

EDIT:
Forgot to mention that sedis uses returnResourceObject, which is a bit different from returnResource.
