- Install Python
- Install Docker
- Install Node.js
- Clone the repository:
git clone [email protected]:carogomezt/django-redis-demo.git
- Go to the project folder and create a new virtual environment:
cd django-redis-demo
python3 -m venv venv
source venv/bin/activate
- Install the dependencies:
pip install -r requirements.txt
- Run the migrations:
python manage.py migrate
- Create a super user:
python manage.py createsuperuser
- To check that everything was installed correctly, run the app and confirm that it shows some content:
python manage.py runserver
Note: If you have Redis installed locally, you can skip the next step.
- Stop the running app and start the Docker image; we use a Redis Docker image here to avoid additional installations:
docker-compose up
You can see that the cache is defined in django_cache/settings.py. Here you define your caching service; in our case it is Redis:
CACHES = {
"default": {
"BACKEND": "django_redis.cache.RedisCache",
"LOCATION": "redis://redis:6379/",
"OPTIONS": {
"CLIENT_CLASS": "django_redis.client.DefaultClient"
},
}
}
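If you run Redis locally instead of through Docker Compose (per the note above), the hostname in LOCATION changes: "redis" only resolves inside the Compose network. A sketch of the local variant, assuming Redis listens on its default port:

```python
# Local-Redis variant of the CACHES setting (illustrative sketch):
# the Compose setup reaches the container via the service name "redis",
# while a locally installed Redis is reached via 127.0.0.1.
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient"
        },
    }
}
```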
We installed django-redis because it acts as a client that abstracts the communication with Redis.
Under store/views.py you can see that we have two views. The first is the products view, which queries the database every time we want to return all products. The second view uses caching: it checks whether the cache key is saved; if not, it retrieves the data from the database and saves it in the cache, using the default timeout of 300 seconds (5 minutes).
- Normal view: http://localhost:8000/products
- Cached View: http://localhost:8000/cached_products
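The cache-aside pattern the cached view follows can be sketched in plain Python. A dict stands in for Redis here so the sketch is self-contained; the real view calls Django's `cache.get()`/`cache.set()` against the Redis backend configured above, and the function and key names are illustrative, not the repository's actual code:

```python
import time

CACHE_TTL = 300  # Django's default timeout: 300 seconds (5 minutes)
_cache = {}      # stands in for Redis: key -> (expires_at, value)

def cache_get(key):
    entry = _cache.get(key)
    if entry is None or entry[0] < time.time():
        return None  # missing or expired
    return entry[1]

def cache_set(key, value, ttl=CACHE_TTL):
    _cache[key] = (time.time() + ttl, value)

def cached_products(fetch_from_db):
    data = cache_get("product")
    if data is None:
        # Cache miss: hit the database, then save for subsequent requests.
        data = fetch_from_db()
        cache_set("product", data)
    return data
```

The key point is that the database callable only runs on a cache miss; within the timeout window, every request is served from memory.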
To test these APIs we will run a load test. I will be using an npm package called loadtest, which you can install globally with this command:
sudo npm install -g loadtest
Once that is done, let's test our non-cached API. Run the command:
loadtest -n 100 -k http://localhost:8000/products
After running that command we can see the following result:
INFO Completed requests: 100
INFO Total errors: 0
INFO Total time: 2.547047284 s
INFO Requests per second: 39
INFO Mean latency: 25.1 ms
This means that our API can handle only 39 requests per second.
Let's test the cached API.
loadtest -n 100 -k http://localhost:8000/cached_products
After running that command we can see the following result:
INFO Completed requests: 100
INFO Total errors: 0
INFO Total time: 1.413555047 s
INFO Requests per second: 71
INFO Mean latency: 13.9 ms
The result is much better: almost double the number of requests per second. Why is this possible? Because the first time you hit the cached endpoint the application queries the database, but subsequent calls to the same URL bypass the database and read from the cache, where the data is already available.
If you want to see how everything is stored in Redis, you can use the Redis CLI. Open a terminal in your Redis container and run:
redis-cli
To see the keys, you can run:
127.0.0.1:6379> KEYS *
And to see how values are stored in Redis, you can run this command:
127.0.0.1:6379> GET :1:product
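The `:1:product` name comes from Django's default key construction, `<KEY_PREFIX>:<VERSION>:<key>`: with an empty KEY_PREFIX and the default VERSION of 1, a value cached under the key "product" lands in Redis as ":1:product". A quick sketch of that default behaviour:

```python
# Django's default cache key construction ("<KEY_PREFIX>:<VERSION>:<key>"):
# KEY_PREFIX defaults to "" and VERSION defaults to 1, which is why the
# key "product" appears in Redis as ":1:product".
def make_key(key, key_prefix="", version=1):
    return f"{key_prefix}:{version}:{key}"

make_key("product")  # -> ":1:product"
```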