
cosmos-cache's Introduction

Optimize Cosmos query calls with this chain-synchronized caching layer.

This program sits on top of another server, acting as middleware between the requesting client and the actual Cosmos RPC/API server.

It supports:

  • Variable length cache times (for both RPC methods & REST URL endpoints)

  • Disable specific endpoints entirely from being queried (ex: REST API /accounts)

  • Enable cache only until the next block (via Tendermint RPC event subscription)

  • Cached RPC requests

  • Cached REST requests

  • Swagger + OpenAPI support (openapi.yml cached)

  • HttpBatchClient (for RPC with Tendermint 0.34 client)

  • Statistics (optional /stats endpoint with password)

  • Websocket basic passthrough support for Keplr wallet

  • Index blocks (TODO?)

Public Endpoints

Juno

Akash

CosmosHub

Comdex

Chihuahua

Injective

Pre-Requirements

  • A Cosmos RPC / REST server endpoint (state synced, full node, or archive).
  • A reverse proxy (to forward subdomain -> the endpoint cache on a machine)

NOTE: Redis was used in earlier versions. If you still wish to use Redis, it can be found in v0.0.8.

Where to run

Ideally, you should run this on your RPC/REST node so queries stay on localhost. However, you can also run it on other infrastructure, including your reverse proxy itself or another separate node. This makes it possible to run on cloud providers such as Akash, AWS, GCP, Azure, etc.


Setup

python3 -m pip install -r requirements/requirements.txt --upgrade

# Edit the ENV file to your needs
cp configs/.env .env

# Update which endpoints you want to disable / allow (regex) & how long to cache each for.
cp configs/cache_times.json cache_times.json

# Then run to ensure it was set up correctly
python3 rest.py
# ctrl + c
python3 rpc.py
# ctrl + c

# If all is good, continue on.
# NOTE: You can only run 1 of each locally at a time because WSGI is a pain. Running both in parallel requires systemd services.

# Then point your NGINX / CADDY config to this port rather than the default 26657 / 1317 endpoints
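For example, a minimal NGINX stanza for this (illustrative only: the server name and the cache's listen port are assumptions, not values from this repo) could look like:

```nginx
# Hypothetical reverse-proxy stanza: forward the subdomain to the
# local cache instead of the bare 26657 / 1317 endpoint.
server {
    server_name rpc.example.com;

    location / {
        proxy_pass http://127.0.0.1:5001;  # port the cache listens on (assumed)
        proxy_set_header Host $host;
    }
}
```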

Running in Production

Documentation

cosmos-cache's People

Contributors

charlesjudith, reecepbcups


cosmos-cache's Issues

Redis Memory Limit & no DB dump

My Redis instance took up all my memory since it kept saving every key to a DB dump. We need to disable this:

code /etc/redis/redis.conf

# maxmemory <bytes>
maxmemory 16gb

# maxmemory-samples 5
maxmemory-samples 5

# stop-writes-on-bgsave-error yes
stop-writes-on-bgsave-error no

# then comment out the following lines
save 900 1
save 300 10
save 60 10000

systemctl restart redis-server

Cache batch RPC requests

Iterate over the request array in order and pair each request to its return value (responses come back in the same array order).
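The pairing step could be sketched like this (an illustrative helper, not code from this repo; `pair_batch` and the serialized-request cache key are assumptions):

```python
import json

def pair_batch(batch_requests: list, batch_responses: list) -> dict:
    """Pair each JSON-RPC request in a batch with its response.

    Responses come back in the same array order as the requests, but
    matching on the "id" field is safer than relying on position alone.
    """
    by_id = {resp["id"]: resp for resp in batch_responses}
    # Use the serialized request as a cache key for each paired response.
    return {
        json.dumps(req, sort_keys=True): by_id[req["id"]]
        for req in batch_requests
    }
```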

Multinode round robin

In the .env, comma separated.

I guess I'll state sync another node?

In the Req_handler.py, randomly select one.
If a node is bad, time it out in memory for X period of time.

Allow the backup to also be round robin? Maybe in the future.

Clear data on new block option

Clear non-long-term data (blocks, Txs, etc.) on the mint of a new block. This ensures data is up to date within the time params of block production.

Allow configuration for networks like EVMOS with 1-2s blocks.

In config:
0 = no cache
-1 = disabled
-2 = clear cache when a new block is made
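The proposed config values could be interpreted by a helper along these lines (a sketch; `cache_action` and the returned labels are hypothetical, not from this repo):

```python
def cache_action(seconds: int) -> str:
    """Map a cache_times.json value to a caching behaviour:
    >0 cache that many seconds, 0 no cache, -1 endpoint disabled,
    -2 clear cache when a new block is made."""
    if seconds == -2:
        return "until-next-block"
    if seconds == -1:
        return "disabled"
    if seconds == 0:
        return "no-cache"
    return f"cache-{seconds}s"
```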

testing

Docker run, run Txs and queries, ensure the data matches. Good.

Get ready for prod / cleanup repo & instructions

Just edit .env and cache_times.json (not pretty yet)

For best performance:

# == QUERY INCREMENT LOGGING ===
ENABLE_COUNTER=false
INCREASE_COUNTER_EVERY=250
STATS_PASSWORD="" # blank = no password   -- https://rest_endpoint/stats?password=mypass

Requires Redis to be installed. The default config is private: no password, no outside connections.

Point the reverse proxy to it instead of your RPC / REST API endpoints.

This will break Keplr support, but handles normal requests for webapps fine

./run_rpc.sh has all the info you need to set up the service and start it.
For now, RPC is required to restart once per day via crontab for memory reasons with Redis.

WORKERS * THREADS should equal the number of CPU threads you have. Threads are more important than workers. You can increase both as much as you want depending on request volume.
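A hedged sizing helper for the WORKERS/THREADS split might look like this (illustrative only; the worker-to-thread bias is an assumption, not a value from this repo):

```python
import os
from typing import Optional, Tuple

def suggest_workers_threads(cpu_threads: Optional[int] = None) -> Tuple[int, int]:
    """Suggest a gunicorn WORKERS/THREADS split so that
    workers * threads roughly equals the available CPU threads,
    favouring threads over workers."""
    total = cpu_threads or os.cpu_count() or 1
    workers = max(1, total // 4)  # favour threads over workers
    threads = max(1, total // workers)
    return workers, threads
```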

narrow down abci_query bypass POST

In the RPC POST handler, if method != "abci_query" we can NOT cache it for some reason:

error: response id (0) does not match request id (1)

The temp fix: if the method is not abci_query, we do not cache it.

In the future, we should only allow specific paths (via .env) so we can bypass, e.g. /cosmwasm.wasm.v1.Query/SmartContractState, Account, Balances, etc.
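One possible fix for the id-mismatch error (a sketch, not this repo's actual code; `with_request_id` is a hypothetical name) is to rewrite the cached response's id to match each incoming request before returning it:

```python
import copy

def with_request_id(cached_response: dict, request_id) -> dict:
    """Return a copy of the cached JSON-RPC response with its "id"
    rewritten to match the incoming request, avoiding the
    'response id does not match request id' error."""
    resp = copy.deepcopy(cached_response)
    resp["id"] = request_id
    return resp
```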

Stress Test tool / DDoS against endpoints

Hit endpoints with configurable data (status, abci_query, etc) on the RPC

curl, python, TS, go: whatever is easiest.
Dockerize it.

Can then launch on Akash with ~20 instances to spam requests at a node.
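A minimal Python sketch of such a tool (everything here is a placeholder: `RPC_URL`, the helper names, and the worker count are assumptions, not values from this repo):

```python
import json
from concurrent.futures import ThreadPoolExecutor
from urllib import request as urlrequest

RPC_URL = "http://localhost:26657"  # placeholder target

def build_payload(method: str, params: dict, req_id: int) -> bytes:
    """Build a JSON-RPC 2.0 request body for the given method."""
    return json.dumps(
        {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
    ).encode()

def spam(method: str, count: int, workers: int = 20) -> None:
    """Fire `count` concurrent requests of `method` at the RPC."""
    def hit(i: int) -> None:
        req = urlrequest.Request(
            RPC_URL,
            data=build_payload(method, {}, i),
            headers={"Content-Type": "application/json"},
        )
        urlrequest.urlopen(req, timeout=5).read()

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(hit, range(count)))
```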

Migration docs

Easy solution:

  • move RPC from 26657 to another port
  • launch run_rpc.sh on 26657
  • this way you don't have to alter nginx / change the firewall

Move to FastAPI

Can we still use gunicorn, or do we need to use uvicorn instead? Is it still multithreaded?

(removes need of jsonify'ing data)

Ensure to use httpx properly with async requests rather than blocking

Combine RPC&REST to a single systemd service / cache

Could be cool based off user requests

  • On load, get all possible REST endpoints via the openapi. Use * regex for any { } areas
  • if user requests a regex match, forward to 1317
  • else 26657 rpc / websocket req

Compile RPC links into OpenAPI and inject them into the REST openapi. Place on top of the REST API UI.
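The wildcard-regex routing described above could be sketched as follows (hypothetical helpers; ports 1317 and 26657 are the defaults mentioned earlier in this doc):

```python
import re

def path_to_regex(openapi_path: str):
    """Turn an OpenAPI path template into a compiled regex,
    replacing each {param} segment with a wildcard."""
    pattern = re.sub(r"\{[^}]+\}", "[^/]+", openapi_path)
    return re.compile(f"^{pattern}$")

def route(path: str, rest_patterns) -> int:
    # Forward paths matching a known REST endpoint to 1317;
    # everything else goes to the RPC / websocket on 26657.
    return 1317 if any(p.match(path) for p in rest_patterns) else 26657
```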

bypass system?

IP address or special query data required?
JSON file.

Could have groups which can be applied. Not sure this makes sense though, since a node op can just have a separate A record pointing to the machine's non-cached endpoint.

test akash failover

Normal URL for the standard machine.
If it fails, fail over to Akash: asdnokasnodjkasjniod.dns.com
