aio-libs / aiocache
Asyncio cache manager for redis, memcached and memory
Home Page: http://aiocache.readthedocs.io
License: BSD 3-Clause "New" or "Revised" License
Lots of bugs found; testing around the keys_attribute feature needs improvement.
They weren't bugs after all but intended behavior. Still, that behavior should be rethought:
It's time to start adding some checks to the operations (without taking latencies with external storages into account), since caching should be as transparent as possible for the users.
keys_attribute, key and the like are confusing; let's rename them to something more understandable.
Catching ConnectionError is not enough: the original function should be called whenever the cache is not working properly.
A proposal: allow passing a list of exceptions to forward/catch.
Some operations need knowledge of the keys stored in the storage. LRUPolicy tracks the keys in memory, but this is wrong because that state is lost when the process exits.
The behavior should be changed so that operations which modify the db state also keep a list of the existing keys in the same storage.
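A minimal sketch of the idea, using a dict as a stand-in for the real storage (the class and index-key names are illustrative, not aiocache's API): every mutating operation also updates a key index kept in the same storage, so a policy like LRU can rebuild its state after a restart.

```python
class IndexedBackend:
    """Illustrative backend that maintains a key index in the same storage."""
    INDEX_KEY = "__aiocache_keys__"  # hypothetical reserved key for the index

    def __init__(self):
        self.store = {self.INDEX_KEY: set()}

    def set(self, key, value):
        # Write the value and record the key in the shared index.
        self.store[key] = value
        self.store[self.INDEX_KEY].add(key)

    def delete(self, key):
        # Remove the value and drop the key from the shared index.
        self.store.pop(key, None)
        self.store[self.INDEX_KEY].discard(key)

    def keys(self):
        # Policies read the index instead of tracking keys in process memory.
        return self.store[self.INDEX_KEY]

backend = IndexedBackend()
backend.set("a", 1)
backend.set("b", 2)
backend.delete("a")
tracked = sorted(backend.keys())
```

With redis the index could live in a SET in the same database, so it shares the lifetime of the cached data.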
The custom serializer class currently expects serialize/deserialize methods. It seems reasonable to change it to expect dumps/loads instead, since pickle, json, marshmallow and others use those names.
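A sketch of what a custom serializer would look like under the proposed dumps/loads naming (the class itself is illustrative; only the method names follow the proposal):

```python
import json

class JsonSerializer:
    """Hypothetical custom serializer following the dumps/loads convention,
    mirroring pickle, json and marshmallow."""

    def dumps(self, value):
        # Serialize a Python object to a string before storing it.
        return json.dumps(value)

    def loads(self, value):
        # Deserialize the stored string back into a Python object.
        if value is None:
            return None
        return json.loads(value)

serializer = JsonSerializer()
stored = serializer.dumps({"user": 1})
restored = serializer.loads(stored)
```

Because the method names match the stdlib convention, a module like json or pickle could even be passed directly as the serializer.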
Provide a memcached implementation following the BaseCache specification.
All backends must support clearing the cache.
This is not as easy as it seems. Should every instance keep a list of the keys it has added, so it removes only those? Should it work by namespace? What if another key unintentionally starts with that namespace? We would remove it too, which is wrong. Is it time to start using a different database for each namespace (check how to do that with memcached)?
https://github.com/aio-libs/aiopg
https://github.com/MagicStack/asyncpg
Check if there are other async libs
Examples are a good way to cover use cases; let's use them as acceptance tests. To do so we need:
Once the refactor is done, backends will just be thin wrappers around the underlying clients such as aioredis, aiomcache, dict, etc.
Consider a mixin approach so the client commands can be called as if you were using the client directly.
Now that mget/mset support has been added, a decorator for setting/retrieving multiple keys can be implemented.
The decorator should check which keys are already available and only query for the missing ones.
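A synchronous sketch of the idea with a plain dict as the cache (the name multi_cached and the call shape are assumptions, not the final API): the decorator serves the hits from the cache and only calls the wrapped function for the missing keys.

```python
import functools

def multi_cached(cache):
    """Hypothetical decorator: fetch cached keys, call fn only for misses."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(keys):
            hits = {k: cache[k] for k in keys if k in cache}
            missing = [k for k in keys if k not in cache]
            if missing:
                # fn is expected to return a dict for the missing keys only.
                fresh = fn(missing)
                cache.update(fresh)
                hits.update(fresh)
            return hits
        return wrapper
    return decorator

cache = {"a": 1}
calls = []  # record what the wrapped function was actually asked for

@multi_cached(cache)
def load(keys):
    calls.append(list(keys))
    return {k: ord(k) for k in keys}

result = load(["a", "b"])
```

Here "a" is served from the cache and only "b" reaches the function, which is exactly the partial-query behavior described above.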
Policies should be transformed into a hook mechanism, so the user can implement whatever behavior they want. The naming will change, so this introduces a breaking change. Also, for users to be able to implement interesting actions, hooks should receive some data:
TODO:
The decorated function must still be called, and the exception must not propagate, when the connection is refused.
def cached(cache=None, ...):
    cache = get_cache(cache=cache, ...)
    # ...
    def cached_decorator(fn):
        # ...
    return cached_decorator
Basically, get_cache returns a new instance of the backend for every call.
This means that every decorated function will have its own backend instance,
and that every such function will create at least one connection to some
remote resource (redis, memcached, etc.; only if it's not MemoryBackend),
and this might become a problem.
This will give the option to combine with other input args
Using RedisCache, SimpleMemoryCache and company as the user-facing classes causes some design problems: reusing code is hard, making the serializer aware of the backend being used is difficult, etc.
A new design will be done where a main Cache class will be used. This class will contain the three main components:
The class will expose the interface to interact with the cache and wrap the backend calls with the serializer and policy calls.
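A minimal sketch of that composition, assuming the three components are a backend, a serializer and a policy (DictBackend, NoopPolicy and the hook names are illustrative stand-ins, not aiocache's real classes):

```python
import json

class Cache:
    """Illustrative facade wrapping backend calls with serializer and policy."""

    def __init__(self, backend, serializer, policy):
        self.backend = backend
        self.serializer = serializer
        self.policy = policy

    def set(self, key, value):
        self.policy.pre_set(key)                          # policy hook
        self.backend.set(key, self.serializer.dumps(value))
        self.policy.post_set(key)                         # policy hook

    def get(self, key):
        raw = self.backend.get(key)
        return None if raw is None else self.serializer.loads(raw)

class DictBackend:
    """Stand-in backend: a plain dict instead of redis/memcached."""
    def __init__(self):
        self.store = {}
    def set(self, key, value):
        self.store[key] = value
    def get(self, key):
        return self.store.get(key)

class NoopPolicy:
    """Stand-in policy with empty hooks."""
    def pre_set(self, key): pass
    def post_set(self, key): pass

# The json module itself satisfies a dumps/loads serializer interface.
cache = Cache(DictBackend(), json, NoopPolicy())
cache.set("k", [1, 2])
value = cache.get("k")
```

User-facing classes like RedisCache could then become thin factories that pick the backend while sharing this single Cache implementation.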
There is an encoding attr defined for the serializer because in some cases aioredis needs it. To keep the interface clean, the encoding step should be done in the serializer layer. This will also help with the memcached implementation.
Currently, when saving 1, it retrieves "1".
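A sketch of moving the encoding into the serializer layer (the class is illustrative): the serializer hands bytes to the backend, which both aioredis and memcached can store as-is, and the round trip preserves the Python type, so saving 1 gets 1 back rather than "1".

```python
import json

class BytesJsonSerializer:
    """Hypothetical serializer that owns the encoding step, so backends can
    always deal in raw bytes."""
    encoding = "utf-8"

    def dumps(self, value):
        # Serialize and encode in one place; backends never touch encodings.
        return json.dumps(value).encode(self.encoding)

    def loads(self, raw):
        if raw is None:
            return None
        return json.loads(raw.decode(self.encoding))

serializer = BytesJsonSerializer()
round_tripped = serializer.loads(serializer.dumps(1))
```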
TODO:
Add instructions for contributors. Before that, tests should be runnable with a docker env (having to install redis and memcached to run them sucks).
A CHANGES file should also be added to keep track of changes, together with a CONTRIBUTORS file.
For debug purposes it would be nice to be able to disable the cache. Something like AIOCACHE_ENABLED=0 would disable caching for all calls.
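One hedged way such a kill switch could work (only the variable name AIOCACHE_ENABLED comes from the proposal; the helper is an assumption): check the environment once per call and make every cache operation a no-op when it is "0".

```python
import os

def cache_enabled():
    # Hypothetical kill switch: AIOCACHE_ENABLED=0 disables every cache call.
    return os.environ.get("AIOCACHE_ENABLED", "1") != "0"

os.environ["AIOCACHE_ENABLED"] = "0"
disabled = not cache_enabled()      # caching is off while the var is "0"
del os.environ["AIOCACHE_ENABLED"]  # back to the default: caching enabled
```

The decorator could then fall through to the wrapped function whenever cache_enabled() is false, without touching the backend at all.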
https://github.com/aio-libs/aiomysql
Check if there are other async libs
Some serializers may not be compatible with a given backend. It would be better if this error were raised at instantiation time rather than when calling the actual operations. The error should also say which serializers are supported.
If a namespace is passed, it should create a new instance of the default config but with the new namespace. Things to do:
Autodoc not working in RTD...
Right now it only works with functions returning dicts, and it knows the keys only if a param called keys is passed. Some modifications to improve its flexibility:
Clients may need some functionality that is not supported by the exposed interface. Add a new .raw function to access the client directly.
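A sketch of such an escape hatch (both classes are stand-ins; the real .raw would delegate to aioredis/aiomcache): the cache simply forwards the command to the underlying client untouched.

```python
class RedisLikeClient:
    """Stand-in for an underlying client such as aioredis (illustrative)."""
    def execute(self, command, *args):
        # A real client would send the command over the wire.
        return (command, args)

class CacheWithRaw:
    def __init__(self, client):
        self._client = client

    def raw(self, command, *args):
        # Pass the call straight through to the underlying client, for
        # commands the cache interface does not expose.
        return self._client.execute(command, *args)

cache = CacheWithRaw(RedisLikeClient())
reply = cache.raw("PERSIST", "mykey")
```

Users get full access to client-specific commands without the cache having to mirror every one of them in its own interface.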
Right now caching can only be done explicitly. A decorator should be provided.
Right now many loops are used to process the keys in the mget and mset calls. This should be improved.
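One hedged way to cut those per-key loops (a sketch, not aiocache's implementation; the dict stands in for the backend) is to fan out the single-key fetches concurrently with asyncio.gather instead of awaiting them one by one:

```python
import asyncio

async def get_one(store, key):
    # Stand-in for a single-key backend fetch (illustrative).
    await asyncio.sleep(0)
    return store.get(key)

async def mget(store, keys):
    # Fire all single-key fetches concurrently instead of looping serially;
    # gather preserves the order of the input keys in its result list.
    return await asyncio.gather(*(get_one(store, k) for k in keys))

store = {"a": 1, "b": 2}
values = asyncio.run(mget(store, ["a", "b", "c"]))
```

For redis the better fix is the native MGET/MSET commands; the gather approach is the fallback when the backend has no multi-key primitive.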
This would help to avoid problems like race conditions, dogpile effect, etc.
Some ideas:
References:
A couple of things to keep in mind:
Redis
For now I'm thinking of using the simple approach described in https://redis.io/topics/distlock#correct-implementation-with-a-single-instance (the lease time should be configurable by the user).
Memcached
https://bluxte.net/musings/2009/10/28/simple-distributed-lock-memcached/
https://github.com/memcached/memcached/wiki/ProgrammingTricks#avoiding-stampeding-herd
Memory
Just go with asyncio.Lock
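For the memory backend, a sketch of what "just go with asyncio.Lock" could look like (the per-key registry is an assumption): each key gets its own lock, so concurrent writers to the same key serialize while different keys stay independent.

```python
import asyncio

_locks = {}  # hypothetical per-key lock registry for the memory backend

def get_key_lock(key):
    # Lazily create one asyncio.Lock per key.
    return _locks.setdefault(key, asyncio.Lock())

async def locked_update(counts, key):
    # Only one coroutine at a time runs the read-modify-write for a key.
    async with get_key_lock(key):
        counts[key] = counts.get(key, 0) + 1

async def main():
    counts = {}
    await asyncio.gather(*(locked_update(counts, "k") for _ in range(5)))
    return counts

result = asyncio.run(main())
```

This covers the single-process memory case only; redis and memcached still need the distributed approaches linked above.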
The option should be compatible with the decorator too. Let's consider a simple case:
Imagine we want to store a maximum number of keys for each different function call. By specifying a max per backend, we may end up with lots of cache misses because we would be deleting keys belonging to other functions.
In general this isn't much of a problem, because each decorator instantiates its own backend instance. The only edge case is config_default_cache, where the developer may set a small size and then use it for multiple function calls. Maybe a warning could be displayed when the developer configures the cache with the max_keys option.
Challenges:
The package is too silent =/
Most of the methods for each backend do the same thing. Try to use just one test file and run the tests against each available backend.
Also, Redis and Memcached should run in containers when launching the tests.