mhgbrown / cached_resource
Caching for ActiveResource
License: MIT License
As the title says.
I can create a PR if this is agreed on.
I am doing my tests with VCR, and I am wondering if I should (or could) disable cached_resource in tests so I can continue to use VCR. It does not make sense to use both. Any ideas?
When a cached resource tries to fetch its has_many associations, it crashes:
from /home/ptarud/.rvm/gems/ruby-2.4.0/gems/cached_resource-5.0.1/lib/cached_resource/caching.rb:91:in `new'
from /home/ptarud/.rvm/gems/ruby-2.4.0/gems/cached_resource-5.0.1/lib/cached_resource/caching.rb:91:in `block in cache_read'
from /home/ptarud/.rvm/gems/ruby-2.4.0/gems/activesupport-5.2.0/lib/active_support/core_ext/object/try.rb:16:in `try!'
from /home/ptarud/.rvm/gems/ruby-2.4.0/gems/activesupport-5.2.0/lib/active_support/core_ext/object/try.rb:8:in `try'
from /home/ptarud/.rvm/gems/ruby-2.4.0/gems/cached_resource-5.0.1/lib/cached_resource/caching.rb:87:in `cache_read'
from /home/ptarud/.rvm/gems/ruby-2.4.0/gems/cached_resource-5.0.1/lib/cached_resource/caching.rb:36:in `find_via_cache'
from /home/ptarud/.rvm/gems/ruby-2.4.0/gems/cached_resource-5.0.1/lib/cached_resource/caching.rb:23:in `find_with_cache'
from /home/ptarud/.rvm/gems/ruby-2.4.0/gems/activeresource-5.0.0/lib/active_resource/associations.rb:150:in `block in defines_has_many_finder_method'
I recently updated to 5.1.0 after experiencing the problems that #40 fixed. However, I tried updating an object I had retrieved from the cache, and the persistence field is missing. This means instead of calling update, ActiveResource called create on the object.
I tried calling find with reload => true, but it looks like it writes to the cache and then reads from there, causing the same problem.
Old Behavior (5.0.1):
Cache read: my_active_resource/123
Dalli::Server#connect 127.0.0.1:11211
Cache write: my_active_resource/123 ({:race_condition_ttl=>86400, :expires_in=>604800})
[cached_resource] WRITE my_active_resource/123
Cache read: my_active_resource/123
[cached_resource] READ my_active_resource/123
=> #<MyActiveResource:0x007fb59e0ba8c8 @attributes={"id"=>123}, @prefix_options={}, @persisted=true>
2.3.1 :005 > m.persisted?
=> true
New Behavior (5.1.0):
Cache read: my_active_resource/123
Dalli::Server#connect 127.0.0.1:11211
Cache write: my_active_resource/123 ({:race_condition_ttl=>86400, :expires_in=>604800})
[cached_resource] WRITE my_active_resource/123
Cache read: my_active_resource/123
[cached_resource] READ my_active_resource/123
=> #<MyActiveResource:0x007fab706a1f58 @attributes={"id"=>123}, @prefix_options={}, @persisted=false>
2.3.1 :002 > m.persisted?
=> false
Is there a way to keep persistence? Or stop reading directly from the cache in instances where I want to update?
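The distinction matters because ActiveResource chooses the HTTP verb based on persisted?. A minimal sketch of that behavior (FakeResource is a hypothetical stand-in, not the gem's or ActiveResource's code):

```ruby
# FakeResource mimics how ActiveResource's save picks the HTTP verb from
# persisted?: new records are POSTed (create), existing ones are PUT (update).
class FakeResource
  attr_writer :persisted

  def persisted?
    @persisted
  end

  def save
    persisted? ? "PUT /my_active_resources/123" : "POST /my_active_resources"
  end
end

r = FakeResource.new
r.persisted = false   # what the 5.1.0 cache round-trip produces
r.save                # => "POST /my_active_resources" (create instead of update)
```

So once the cached copy loses @persisted, every subsequent save turns into a create.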
Just a curious question, would there be support for that with this gem?
Example:
MyActiveResource.where(name: 'bob').where(last_name: 'the builder')
throws an error because the previous params { name: 'bob' } are not carried over into original_params:
https://github.com/rails/activeresource/blob/df28dee231f8573c18d9e091c7fd7bc6d6679263/lib/active_resource/base.rb#L1097
I can also create a PR if it's agreed that this should be fixed.
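A sketch of what the fix amounts to (Relation here is a hypothetical stand-in, not ActiveResource's actual class): each where call should merge its clause into the params accumulated so far.

```ruby
# Hypothetical stand-in for ActiveResource's relation: chaining where merges
# the new clause into the previously collected params instead of dropping them.
class Relation
  attr_reader :params

  def initialize(params = {})
    @params = params
  end

  def where(clause)
    Relation.new(@params.merge(clause))  # keeps { name: 'bob' } when chaining
  end
end

rel = Relation.new.where(name: 'bob').where(last_name: 'the builder')
rel.params # => {:name=>"bob", :last_name=>"the builder"}
```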
It would be great to be able to use this gem with more recent Rails versions as well; in my use case, Rails 7.
Currently, the user is made to wait for the caching mechanism to finish its write and read before anything is returned:
cache_collection_synchronize(object, *arguments) if cached_resource.collection_synchronize
cache_write(key, object)
cache_read(key)
Could we wrap these calls in a new thread, so that the thread processes the caching while the retrieved object is returned as soon as the thread has been spawned? That way, users would not need to wait for these three lines to finish, which would reduce response time considerably. This is especially true when collection_synchronize is enabled.
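A minimal sketch of the suggestion, under the stated assumption that the cache write need not complete before the object is returned (all names here are illustrative, not the gem's internals):

```ruby
# Move the cache bookkeeping off the hot path: spawn a thread for the write
# and hand the object back to the caller immediately.
CACHE = {}

def slow_cache_write(key, value)
  sleep 0.05            # stand-in for a slow store / collection_synchronize round-trip
  CACHE[key] = value
end

def find_with_async_cache(key, object)
  writer = Thread.new { slow_cache_write(key, object) }
  [object, writer]      # caller gets the object without waiting for the write
end

result, writer = find_with_async_cache("user/1", { "id" => 1 })
writer.join             # joined here only to show that the write does land
CACHE["user/1"]         # => {"id"=>1}
```

A real implementation would also need to handle thread lifecycle and read-after-write consistency (a find issued before the background write lands would miss the cache).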
This is a problem when re-creating, from the cache, an object that is actually nested JSON. If the class does not exist, Dalli throws an error stating that it does not understand the constant. For example:
[{
  "a": 1,
  "b": [
    { "c": 5, "d": 6, "e": 7 },
    { "c": 4, "d": 5, "e": 6 }
  ]
},
{
  "a": 2,
  "b": [
    { "c": 5, "d": 6, "e": 7 },
    { "c": 4, "d": 5, "e": 6 }
  ]
}]
When the gem tries to re-instantiate the nested b objects, it throws an error if the corresponding class does not exist. Using Hashie or OpenStruct could probably solve the problem, at https://github.com/mhgbrown/cached_resource/blob/master/lib/cached_resource/caching.rb#L90.
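A sketch of the OpenStruct idea (wrap is a hypothetical helper, not the gem's code): nested hashes are wrapped generically instead of being resolved to model constants that may not exist.

```ruby
require "json"
require "ostruct"

# Recursively wrap parsed JSON in OpenStructs so nested objects get dot access
# without requiring a constant like SomeModel::B to be defined.
def wrap(value)
  case value
  when Hash  then OpenStruct.new(value.transform_values { |v| wrap(v) })
  when Array then value.map { |v| wrap(v) }
  else value
  end
end

records = wrap(JSON.parse('[{"a":1,"b":[{"c":5,"d":6,"e":7}]}]'))
records.first.b.first.c # => 5
```

The trade-off is that the wrapped objects lose any behavior defined on the real model classes; they are pure data.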
@mhgbrown No issue at all, just wanted to thank you: this "just works", it's awesome! Kudos, have a beer 🍺
We're running into a problem where we need to start using different Ruby versions and different Bundler versions to properly test all permutations locally. This is easy to configure with Travis CI, but less so locally. One Ruby version and one Bundler version no longer suffice for all dependency options. For example:
I have a use case where we need to clear out the entire cache after each request to make sure the next one is fresh.
I'm finishing this up right now -- just wanted to get it into the next release.
Hey! I'm not sure if there is already a way to do this, but I didn't find anything in the documentation.
If I have a cached collection like:
objects = Object.find(:all)
and I create a new object with object = Object.create(params),
it would be nice to be able to add that newly created object to the cached objects collection.
It would be nice to have a way to call clear_cache on an instance and have it clear the cache for just that instance. That way, the next time the instance is loaded, a reload is forced, rather than reloading it now when we may not need it.
The abstract class ActiveSupport::Cache::Store prescribes several options, one of which is race_condition_ttl. However, when calling

cached_resource cache: ActiveSupport::Cache::MemoryStore.new(race_condition_ttl: 10)

in class context, that option remains without effect. This is because the underlying cache does not know that a resource of the same kind is already about to be retrieved, and therefore fires a new request as soon as somebody tries to pull the resource from the cache again. This can be prevented by using ActiveSupport::Cache::Store#fetch instead of #write; #write cannot yield.

In general, I do not think it is a good idea to take the decision "to reload or not to reload" away from the underlying cache object and implement the decision making yourself with #read and #write. I am preparing a PR to fix the said deficiency by using #fetch. Please let me know what you think.
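A toy store (not ActiveSupport's) to illustrate the #fetch semantics the proposal relies on: the block runs only on a miss, so the store itself controls regeneration and, in the real class, can apply options like race_condition_ttl around it.

```ruby
# TinyStore is a deliberately minimal cache: read/write as in the current gem
# code, plus fetch, which regenerates the value exactly once on a miss.
class TinyStore
  def initialize
    @data = {}
  end

  def read(key)
    @data[key]
  end

  def write(key, value)
    @data[key] = value
  end

  def fetch(key)
    return @data[key] if @data.key?(key)
    @data[key] = yield  # miss: run the block once, under the store's control
  end
end

store = TinyStore.new
calls = 0
2.times { store.fetch("thing/1") { calls += 1; "payload" } }
calls # => 1, the expensive block ran only once
```

With #read plus #write, that "run the block once" decision lives in caller code, which is exactly what prevents the store from coordinating concurrent regeneration.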
Hello.
I've encountered this when I tried to cache an ActiveResource model that includes an array as a field. The error is:
"undefined class/module Place::Child"
And the resource json response is:
[
{
"id" : 1,
"name" : "Chile",
"children": [
{
"id": 3,
"name": "Metropolitana"
},
{
"id": 4,
"name": "Valparaíso"
}
]
},
{
"id" : 2,
"name" : "Colombia",
"children": [
{
"id": 5,
"name": "Bogotá"
},
{
"id": 6,
"name": "Cartagena de Indias"
}
]
}
]
The "children" field is recognized as a class, not just as a simple field.
Is there a way to avoid this?
Are we handling this?
Could it be added as a configuration option?
http://guides.rubyonrails.org/caching_with_rails.html#activesupport-cache-store
In Rails 3.1.3 the MemoryStore (and perhaps other stores) inexplicably freezes the stored object on read:
cache = ActiveSupport::Cache::MemoryStore.new
s = ""
s.frozen? # false
cache.write(:s, s)
s.frozen? # false
cache.read(:s)
s.frozen? # true
This manifests itself when a resource is retrieved twice and the first one is altered:
a1 = Account.find(1) # From remote service
a1.frozen? # false
a2 = Account.find(1) # From cache
a1.frozen? # true
a1.name = "Checking" # Raises TypeError: can't modify frozen object
A simple workaround would be to dup the object before storing it. Updated line 95 of cached_resource/caching:
result = cached_resource.cache.write(key, object.dup, :expires_in => cached_resource.generate_ttl)
This issue appears to have been fixed in Rails 3.2, so this change is only necessary for backwards compatibility.
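The workaround in miniature (freezing_write is an illustrative helper that simulates the 3.1.3 MemoryStore behavior described above): storing a dup means any freezing the store does never touches the caller's object.

```ruby
# The store freezes what it holds; writing a dup keeps the original mutable.
STORE = {}

def freezing_write(key, value)
  STORE[key] = value.freeze   # simulate the store freezing its copy on read
end

s = +"Checking"               # unary + guarantees an unfrozen string
freezing_write(:s, s.dup)     # store a dup, as the patched line does
s.frozen?                     # => false, s stays editable
STORE[:s].frozen?             # => true
```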
Configuration options set for ActiveResource::Base will not be set for its descendants.
When running clear_cache, the ^ anchor does not seem to be supported. In the Redis cache store, the pattern is passed to SCAN:
https://github.com/rails/rails/blob/main/activesupport/lib/active_support/cache/redis_cache_store.rb#L198
Reproducing this as closely as possible against Redis directly, we noticed that the ^ blocks retrieval of keys.
Example:
r.scan(0, :match => "deal/*", count: 100)
=>
["476",
["deal/product/all/{:params=>{:universe_id=>2,:q=>{:published_eq=>true}}}",
"deal/category/5/relevant-products-for-newborn",
"deal/brand/25",
"deal/product/all/{:params=>{:category_id=>6,:q=>{:published_eq=>true}}}",
"deal/product/all/{:params=>{:category_id=>11,:q=>{:published_eq=>true}}}",
"deal/universe/all",
"deal/product/all/{:params=>{:category_id=>4,:q=>{:published_eq=>true}}}",
"deal/category/11/relevant-products-for-newborn",
"deal/brand/8",
"deal/product/all/{:params=>{:category_id=>15,:q=>{:published_eq=>true}}}",
"deal/brand/all/{:params=>{:q=>{:published_eq=>true,:random=>true}}}",
"deal/product/all/{:params=>{:category_id=>9,:q=>{:published_eq=>true}}}",
"deal/brand/21",
"deal/brand/4",
"deal/brand/13",
"deal/brand/16",
"deal/brand/12"]]
r.scan(0, :match => "^deal/*", count: 100)
=> ["476", []]
It is easier to delete keys when we have an established pattern, i.e. a prefix. This feature request is to add a configurable prefix for the cache keys.
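A sketch of the idea (PREFIX and cache_key are illustrative names, not the gem's API): with a plain prefix on every key, SCAN's glob-style MATCH can select the gem's keys without regex anchors like ^.

```ruby
# Prepend a configurable namespace to every cache key so all of the gem's
# entries share a glob-friendly prefix.
PREFIX = "cached_resource"

def cache_key(*parts)
  "#{PREFIX}/#{parts.join('/')}"
end

cache_key("deal", "brand", 25)  # => "cached_resource/deal/brand/25"
# Deleting only the gem's keys then becomes:
#   r.scan(0, :match => "#{PREFIX}/*", count: 100)
# which avoids touching unrelated keys such as Sidekiq's.
```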
This is on Rails 3.2.12: a standard API-backed ActiveResource fails when calling #all. Any ideas why this might be happening?
Hello! I have a problem with cached_resource: when a resource is not found, the cache is updated with the error. For example, when I execute MyActiveResource.first, it returns HTTP status code 404 and the cache is updated with nil. If I run MyActiveResource.first again, the cache returns nil. Is it correct that the cache is updated on an error?
The RSpec suite is using the old DSL/syntax.
Hello.
ActiveResource 5.1.0 has a breaking change.
In 5.0.0, the find method with the all, first, and last scopes returns an empty array if no resource is found. In 5.1.0 it returns nil, which leads to a NoMethodError in caching.rb, line 141: nil has no persisted? method.
I'm using the cached_resource gem for caching Active Resource models.
User model:
class User < ActiveResource::Base
  cached_resource

  class Teacher < SimpleDelegator
    attr_accessor :teacher_id

    def initialize(attributes = {}, _persisted = true)
      @teacher_id = attributes['teacher_id']
      super(User.find(@teacher_id))
    end
  end
end
I am trying to cache user resources at /users/:user_id.
Whenever I call the /users/:user_id endpoint, it raises the error "singleton can't be dumped" at the line super(User.find(@teacher_id)).
Please let me know if any other info is required. I am stuck on this; please help me out.
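For context, that error message comes from Marshal, which ActiveSupport cache stores use by default to serialize cached objects: any object carrying a singleton method (which delegator setups can easily end up with) refuses to be dumped. A minimal reproduction, independent of the gem:

```ruby
# Marshal cannot serialize objects that have a singleton class.
obj = Object.new
def obj.hello   # defining a method on obj alone gives it a singleton class
  "hi"
end

begin
  Marshal.dump(obj)
rescue TypeError => e
  e.message # => "singleton can't be dumped"
end
```

So the failure happens at cache-write time, when cached_resource hands the delegator-wrapped object to the store for serialization.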
Firstly, thanks for the gem.
Is it possible to configure caching so that, in the event of the remote server being unavailable, cached values can still be used after they become stale?
We are using Redis as our cache store. Using clear_cache deletes everything in Redis, even Sidekiq data.
...so that he can push updates to the gem as well.
@Daniel-ltw — what email shall I use to make you an owner of the gem? [email protected]?
It looks like find calls are intercepted, but where calls are not. Is this by design?
As the title says, I think we shouldn't cache nil or [] responses.
I can create a PR for this if you agree.
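A sketch of the proposed guard (cacheable? and cache_write_unless_empty are hypothetical helpers, not the gem's API): skip the cache write whenever the API returned nothing.

```ruby
# Only write to the cache when the response carries actual data; nil and
# empty collections pass through uncached so the next call retries the API.
def cacheable?(object)
  !object.nil? && !(object.respond_to?(:empty?) && object.empty?)
end

def cache_write_unless_empty(store, key, object)
  store[key] = object if cacheable?(object)
  object
end

store = {}
cache_write_unless_empty(store, "thing/1", nil)           # not written
cache_write_unless_empty(store, "thing/all", [])          # not written
cache_write_unless_empty(store, "thing/2", { "id" => 2 }) # written
store.keys # => ["thing/2"]
```

This would also address the 404 issue reported above, where a not-found response poisons the cache with nil.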