
redis-cell's Introduction

Warning: This package is in "best effort" maintenance mode. I try to respond to opened issues and keep it reasonably up-to-date with respect to the underlying Rust toolchain, but am no longer actively developing it.

A Redis module that provides rate limiting in Redis as a single command. Implements the fairly sophisticated generic cell rate algorithm (GCRA) which provides a rolling time window and doesn't depend on a background drip process.

The primitives exposed by Redis are perfect for doing work around rate limiting, but because it's not built in, it's very common for companies and organizations to implement their own rate limiting logic on top of Redis using a mixture of basic commands and Lua scripts (I've seen this at both Heroku and Stripe for example). This can often result in naive implementations that take a few tries to get right. The directive of redis-cell is to provide a language-agnostic rate limiter that's easily pluggable into many cloud architectures.

Informal benchmarks show that redis-cell is pretty fast, taking a little under twice as long to run as a basic Redis SET (very roughly 0.1 ms per command as seen from a Redis client).
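The decision mechanics of GCRA can be sketched in a few lines of Python. This is an illustrative model of the algorithm, not the module's actual Rust implementation, and it omits the remaining/reset bookkeeping; the name `tat` ("theoretical arrival time") comes from standard descriptions of GCRA:

```python
def gcra(tat, now, max_burst, count, period, quantity=1):
    """One GCRA decision: returns (limited, new_tat).

    tat is the stored "theoretical arrival time" for the key
    (use `now` for a fresh key); all times are in seconds.
    """
    emission_interval = period / count               # seconds per token
    capacity = (max_burst + 1) * emission_interval   # total burst tolerance
    new_tat = max(tat, now) + quantity * emission_interval
    allow_at = new_tat - capacity
    if now < allow_at:
        return True, tat    # limited: stored state is left unchanged
    return False, new_tat   # allowed: the arrival time advances

# Parameters equivalent to CL.THROTTLE user123 15 30 60 on a fresh key:
limited, tat = gcra(tat=0.0, now=0.0, max_burst=15, count=30, period=60)
```

Because the entire state is a single timestamp, no background drip process is needed: the rolling window falls out of comparing `tat` against the current time.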

Install

Binaries for redis-cell are available for Mac and Linux. Open an issue if there's interest in having binaries for architectures or operating systems that are not currently supported.

Download and extract the library, then move it somewhere that Redis can access it (note that the extension will be .dylib instead of .so for Mac releases):

$ tar -zxf redis-cell-*.tar.gz
$ cp libredis_cell.so /path/to/modules/

Or, clone and build the project from source. You'll need to install Rust to do so (this may be as easy as brew install rust if you're on a Mac).

$ git clone https://github.com/brandur/redis-cell.git
$ cd redis-cell
$ cargo build --release
$ cp target/release/libredis_cell.dylib /path/to/modules/

Note that Rust 1.13.0+ is required.

Run Redis pointing to the newly built module:

redis-server --loadmodule /path/to/modules/libredis_cell.so

Alternatively add the following to a redis.conf file:

loadmodule /path/to/modules/libredis_cell.so

Usage

From Redis (try running redis-cli) use the new CL.THROTTLE command loaded by the module. It's used like this:

CL.THROTTLE <key> <max_burst> <count per period> <period> [<quantity>]

Where key is an identifier to rate limit against. Examples might be:

  • A user account's unique identifier.
  • The origin IP address of an incoming request.
  • A static string (e.g. global) to limit actions across the entire system.

For example:

CL.THROTTLE user123 15 30 60 1
               ▲     ▲  ▲  ▲ ▲
               |     |  |  | └───── apply 1 token (default if omitted)
               |     |  └──┴─────── 30 tokens / 60 seconds
               |     └───────────── 15 max_burst
               └─────────────────── key "user123"

Response

This means that a single token (the 1 in the last parameter) should be applied against the rate limit of the key user123. 30 tokens on the key are allowed over a 60 second period with a maximum initial burst of 15 tokens. Rate limiting parameters are provided with every invocation so that limits can easily be reconfigured on the fly.

The command will respond with an array of integers:

127.0.0.1:6379> CL.THROTTLE user123 15 30 60
1) (integer) 0
2) (integer) 16
3) (integer) 15
4) (integer) -1
5) (integer) 2

The meaning of each array item is:

  1. Whether the action was limited:
    • 0 indicates the action is allowed.
    • 1 indicates that the action was limited/blocked.
  2. The total limit of the key (max_burst + 1). This is equivalent to the common X-RateLimit-Limit HTTP header.
  3. The remaining limit of the key. Equivalent to X-RateLimit-Remaining.
  4. The number of seconds until the user should retry (always -1 if the action was allowed). Equivalent to Retry-After.
  5. The number of seconds until the limit will reset to its maximum capacity. Equivalent to X-RateLimit-Reset.
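Putting the header equivalences above together, a service might map the reply onto HTTP headers like so (the `to_headers` helper is a hypothetical sketch, not part of the module):

```python
def to_headers(reply):
    """Map a CL.THROTTLE reply array onto conventional HTTP headers."""
    limited, limit, remaining, retry_after, reset = reply
    headers = {
        "X-RateLimit-Limit": str(limit),
        "X-RateLimit-Remaining": str(remaining),
        "X-RateLimit-Reset": str(reset),
    }
    if limited == 1:
        headers["Retry-After"] = str(retry_after)
    return limited == 0, headers

# The example reply above: allowed, with 15 of 16 tokens remaining.
allowed, headers = to_headers([0, 16, 15, -1, 2])
```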

Multiple Rate Limits

Implement different types of rate limiting by using different key names:

CL.THROTTLE user123-read-rate 15 30 60
CL.THROTTLE user123-write-rate 5 10 60
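When an action is governed by several limits like this, one straightforward convention is to allow it only if every reply allows it, surfacing the largest Retry-After otherwise. A hypothetical aggregation helper:

```python
def combine(replies):
    """Aggregate several CL.THROTTLE replies into a single decision.

    Each reply is the 5-integer array the command returns.
    """
    blocked = [r for r in replies if r[0] == 1]
    if not blocked:
        return True, -1
    # the strictest limit wins: wait out the longest Retry-After
    return False, max(r[3] for r in blocked)

# e.g. the read limit allows, the write limit blocks for 9 more seconds:
allowed, retry_after = combine([[0, 16, 15, -1, 2], [1, 6, 0, 9, 12]])
```

Note that each CL.THROTTLE call consumes its token even when a sibling limit ends up blocking the action, so callers that need all-or-nothing semantics have to compensate separately.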

On Rust

redis-cell is written in Rust and uses the language's FFI capabilities to interact with Redis' own module system. Rust makes a very good fit here because it doesn't need a GC and is bootstrapped with only a tiny runtime.

The author of this library is of the opinion that writing modules in Rust instead of C will convey similar performance characteristics, but result in an implementation that's more likely to be devoid of the bugs and memory pitfalls commonly found in many C programs.

License

This is free software under the terms of the MIT license (see the file LICENSE for details).

Development

Tests and checks

Run the test suite:

cargo test

# specific test
cargo test it_rates_limits

# with debug output on stdout
cargo test it_rates_limits -- --nocapture

CI has checks for both Rustfmt and Clippy (Rust's linter). These can be installed and run locally using Rustup's component framework:

rustup component add rustfmt
cargo fmt --all

rustup component add clippy
cargo clippy -- -D warnings

Releasing

Releases are performed automatically from a script in CI which activates when a new tag of the format v1.2.3 is released. The script builds binaries for all target systems and uploads them to GitHub's releases page.

To perform a release:

  1. Add a changelog entry in CHANGELOG.md using the existing format.
  2. Bump the version number in Cargo.toml.
  3. Commit these changes with a message like Bump to version 1.2.3.
  4. Tag the release with git tag v1.2.3 (make sure to include a leading v).
  5. Push the commit and tags: git push && git push --tags
  6. Edit the new release's title and body in GitHub (a human touch is still expected for the final product). Use the contents for the new version from CHANGELOG.md as the release's body, which allows Markdown content.

redis-cell's People

Contributors

brandur, dayyan, dwerner, fxn, profporridge, tuananh, yourzbuddha


redis-cell's Issues

Why are Retry-After and Reset the same?

Shouldn't the X-RateLimit-Reset response be the number of seconds until the limit is back at maximum capacity? If so, in my example it is always the same as Retry-After.

irb(main):004:0> 60.times { p redis.call('CL.THROTTLE', 'testkey', 0, 3, 60, 1); sleep 1 }
[0, 1, 0, -1, 20]
[1, 1, 0, 18, 18]
[1, 1, 0, 17, 17]
[1, 1, 0, 16, 16]
[1, 1, 0, 15, 15]
[1, 1, 0, 14, 14]
[1, 1, 0, 13, 13]
[1, 1, 0, 12, 12]
[1, 1, 0, 11, 11]
[1, 1, 0, 10, 10]
[1, 1, 0, 9, 9]
[1, 1, 0, 8, 8]
[1, 1, 0, 7, 7]
[1, 1, 0, 6, 6]
[1, 1, 0, 5, 5]
[1, 1, 0, 4, 4]
[1, 1, 0, 3, 3]
[1, 1, 0, 2, 2]
[1, 1, 0, 1, 1]
[1, 1, 0, 0, 0]
[0, 1, 0, -1, 20]
[1, 1, 0, 18, 18]
[1, 1, 0, 17, 17]
[1, 1, 0, 16, 16]
[1, 1, 0, 15, 15]
[1, 1, 0, 14, 14]
[1, 1, 0, 13, 13]
[1, 1, 0, 12, 12]
[1, 1, 0, 11, 11]
[1, 1, 0, 10, 10]
[1, 1, 0, 9, 9]
[1, 1, 0, 8, 8]
[1, 1, 0, 7, 7]
[1, 1, 0, 6, 6]
[1, 1, 0, 5, 5]
[1, 1, 0, 4, 4]
[1, 1, 0, 3, 3]
[1, 1, 0, 2, 2]
[1, 1, 0, 1, 1]
[1, 1, 0, 0, 0]

Reconfiguration caused unexpected behavior

At first it works well, as can be seen below.

127.0.0.1:6579> CL.THROTTLE user1 15 10 20 1
1) (integer) 0
2) (integer) 16
3) (integer) 15
4) (integer) -1
5) (integer) 2
127.0.0.1:6579> CL.THROTTLE user1 15 10 20 1
1) (integer) 0
2) (integer) 16
3) (integer) 15
4) (integer) -1
5) (integer) 2
127.0.0.1:6579> CL.THROTTLE user1 15 10 20 1
1) (integer) 0
2) (integer) 16
3) (integer) 14
4) (integer) -1
5) (integer) 3
127.0.0.1:6579> CL.THROTTLE user1 15 10 20 1
1) (integer) 0
2) (integer) 16
3) (integer) 13
4) (integer) -1
5) (integer) 4
...

Then I tried reconfiguring it. I just increased the rate, and it caused unexpected behavior: the command started returning denials. Why does it deny requests when I increased the rate?

127.0.0.1:6579> CL.THROTTLE user1 15 10 20 1
1) (integer) 0
2) (integer) 16
3) (integer) 13
4) (integer) -1
5) (integer) 4
127.0.0.1:6579> CL.THROTTLE user1 15 1000 20 1
1) (integer) 1
2) (integer) 16
3) (integer) 0
4) (integer) 2
5) (integer) 2
127.0.0.1:6579> CL.THROTTLE user1 15 1000 20 1
1) (integer) 1
2) (integer) 16
3) (integer) 0
4) (integer) 1
5) (integer) 1

I guess this is because the time needed to reset to maximum capacity is not zero, so after I change the configuration it keeps denying until the previous window runs out. And I think this does not conform to the statement in the readme which said: Rate limiting parameters are provided with every invocation so that limits can easily be reconfigured on the fly.

[BENCHMARK] Up to 1,100% faster than ratelimit.js

👍

Rough Specification

As someone with obviously a ton of experience with rate limiting, would be happy to hear your thoughts on the abstraction (if you have the time, of course).

I've been doing really extensive testing for load-testing etc on it and hope to put this into production where it would def be tested against some serious load!

More tests and examples of implementation (including the aggregated result) are in the specification above.

tl;dr

Just want to say thank you! This provides a huge performance improvement over what is pretty much the only other established method I have found, on top of actually implementing GCRA rather than a simple "how many requests were made in the last n seconds" counter.

Stress Test Results

Calls Setup, Starting Requests
Running Tests for:  cell
[ 'user', '1', 'trade' ]
[ 'user', '2', 'trade' ]
[ 'user', '3', 'trade' ]
[ 'user', '4', 'trade' ]
[ 'user', '5', 'trade' ]
[ 'user', '6', 'trade' ]
[ 'user', '7', 'trade' ]
Running Tests for:  rl
[ '1', 60, 'user:1:trade' ]
[ '2', 60, 'user:2:trade' ]
[ '3', 60, 'user:3:trade' ]
[ '4', 60, 'user:4:trade' ]
[ '5', 60, 'user:5:trade' ]
[ '6', 60, 'user:6:trade' ]
[ '7', 60, 'user:7:trade' ]

    --- Test Results ---

    Total Iterations: 5000 * 7 Tests (35,000 iterations each)

    rl:
      Total Duration: 343693971.74181193
      Average: 9819.82776405177
      Max: 14198.909738004208
      Min: 5541.352702006698

    cell:
      Total Duration: 26785993.573875546
      Average: 765.3141021107299
      Max: 1454.7679300010204
      Min: 322.44201999902725

    Diff: cell is 1183% faster

Quick Test Results (consistent against around 500 runs of the test)

// RUN FOR CELL ONLY (Dont Require rl or include in scope at all)
Calls Setup, Starting Requests
Running Tests for:  cell
[ 'user', '1', 'trade' ]
[ 'user', '2', 'trade' ]
[ 'user', '3', 'trade' ]
[ 'user', '4', 'trade' ]
[ 'user', '5', 'trade' ]
[ 'user', '6', 'trade' ]
[ 'user', '7', 'trade' ]

    --- Test Results ---

    Total Iterations: 10 * 7 Tests (70 iterations each)

    cell:
      Total Duration: 231.97511593997478
      Average: 3.3139302277139255
      Max: 4.6971050053834915
      Min: 2.4761980026960373

// RUN FOR RL ONLY (Dont Require cell or include in scope at all)
Calls Setup, Starting Requests
Running Tests for:  rl
[ '1', 60, 'user:1:trade' ]
[ '2', 60, 'user:2:trade' ]
[ '3', 60, 'user:3:trade' ]
[ '4', 60, 'user:4:trade' ]
[ '5', 60, 'user:5:trade' ]
[ '6', 60, 'user:6:trade' ]
[ '7', 60, 'user:7:trade' ]

    --- Test Results ---

    Total Iterations: 10 * 7 Tests (70 iterations each)

    rl:
      Total Duration: 2249.6415529847145
      Average: 32.13773647121021
      Max: 36.03293499350548
      Min: 28.242240011692047

Implementation / Benchmark

So this is definitely not even a fair benchmark, considering the module runs as a native module while I implemented Lua logic to handle the situation - but my goal was to implement a multi-bucket rate limiter using this as a base.

Team is super spooked by the "note on stability" part though and wants to go with RateLimit.js anyway which is completely client side :-\

Essentially I take a limits.yaml and use it to build a rate limiting bucket:

schema:
  # user:
  user:
    children:
      # if no others match
      "*":
        # limits against user:${username}
        limits:
          - 15
          - 30
          - 60
        children:
          # limits against user:${username}:trade
          trade:
            limits:
              - 5
              - 10
              - 15

Then I receive a "bucket key", which is an array of arguments ("user", "myuserid", "trade"), and run against each step where limits is found, returning the results so that we can easily and efficiently implement multi-tiered rate limits against user actions with as few requests as possible.

So essentially when I call ("user", "myuserid", "trade") it is running

CL.THROTTLE user:myuserid 15 30 60
CL.THROTTLE user:myuserid:trade 5 10 15
--[[
  Summary:
    Takes the generated lua limits table (from limits.yaml which must be compiled 
    whenever we need to change it) and iterates the request, returning the results
    of all matching rate limits in the requested path.
]]
local LimitsSchema = {
  ["user"] = {
    ["children"] = {["*"] = {["limits"] = {15, 30, 60}, ["children"] = {["trade"] = {["limits"] = {5, 10, 15}}}}}
  }
}

local Response, LimitsTable, CurrentPath = {}, {}, {}
local Complete = 1
local child

local CheckLimit = function(bucket, args)
  return redis.call("CL.THROTTLE", bucket, unpack(args))
end

for k, v in ipairs(KEYS) do
  -- guard against walking past a leaf node that has no children
  if LimitsSchema == nil then
    Complete = 0
    break
  end

  if LimitsSchema[v] then
    child = LimitsSchema[v]
  elseif LimitsSchema["*"] then
    child = LimitsSchema["*"]
  else
    Complete = 0
    break
  end

  table.insert(CurrentPath, v)

  if child["limits"] then
    LimitsTable[table.concat(CurrentPath, ":")] = child["limits"]
  end
  LimitsSchema = child["children"]
end

-- bail out before consuming any tokens if the path was invalid
if Complete == 0 then
  return redis.error_reply("Invalid Path at: " .. table.concat(CurrentPath, ":"))
end

for k, v in pairs(LimitsTable) do
  table.insert(Response, CheckLimit(k, v))
end

return Response

Lua Performance: In an initial check of the Lua performance when running the benchmark, it looks like the latency of each request averaged around 0.50 milliseconds per call... pretty awesome!

Then utilizing ioredis to handle the script

// static setup that shouldnt count towards perf since it is
// done one time per server instance
const Redis = require("ioredis");
const fs = require("fs");
const path = require("path");
const redis = new Redis();

const cmd = {
  lua: fs.readFileSync(
    path.resolve(__dirname, "..", "lua", "traverse.lua")
  )
};

redis.defineCommand("limiter", cmd);

module.exports = redis;

// in a second file, measure per-call latency using the command defined above
const { performance } = require("perf_hooks");
const redis = require("./cell-setup");

function checkPath(...path) {
  return redis.limiter(path.length, ...path);
}

async function request(...args) {
  const startTime = performance.now();
  await checkPath(...args);
  return performance.now() - startTime;
}

module.exports = {
  request
};

Documentation

Hi, do you have some documentation, examples for commands?

Doubts about remaining value

I think that the remaining value is incorrect, because it takes into account the max burst, but not the count per period.

Two examples explaining my point of view. I am sending a request to CL.THROTTLE every ~250 milliseconds with the following configurations.

Example with: CL.THROTTLE thekey 0 2 1

Req	millis	limited	limit	remaining	retry	reset
1 	259 	0 	1 	0 	-1 	0
2 	533 	0 	1 	0 	-1 	0
3 	783 	0 	1 	0 	-1 	0
4 	1035 	0 	1 	0 	-1 	1
5 	1289 	0 	1 	0 	-1 	1
6 	1541 	1 	1 	0 	1 	1
7 	1790 	0 	1 	0 	-1 	1
8 	2040 	1 	1 	0 	1 	1

I would expect:

Req	millis	limited	limit	remaining	retry	reset
1 	259 	0 	1 	2 	-1 	0
2 	533 	0 	1 	1 	-1 	0
3 	783 	0 	1 	1 	-1 	0
4 	1035 	0 	1 	0 	-1 	1
5 	1289 	0 	1 	0 	-1 	1
6 	1541 	1 	1 	0 	1 	1
7 	1790 	0 	1 	0 	-1 	1
8 	2040 	1 	1 	0 	1 	1

Example with: CL.THROTTLE thekey 2 2 1

Req	millis	limited	limit	remaining	retry	reset
1 	259 	0 	3 	2 	-1 	0
2 	512 	0 	3 	1 	-1 	0
3 	765 	0 	3 	1 	-1 	0
4 	1016 	0 	3 	0 	-1 	1
5 	1268 	0 	3 	0 	-1 	1
6 	1519 	0 	3 	0 	-1 	1
7 	1769 	0 	3 	0 	-1 	1
8 	2021 	0 	3 	0 	-1 	2
9 	2273 	0 	3 	0 	-1 	2
10 	2523 	1 	3 	0 	1 	2
11 	2775 	0 	3 	0 	-1 	2
12 	3025 	1 	3 	0 	1 	2

I would expect:

Req	millis	limited	limit	remaining	retry	reset
1 	259 	0 	3 	4 	-1 	0
2 	512 	0 	3 	3 	-1 	0
3 	765 	0 	3 	3 	-1 	0
4 	1016 	0 	3 	2 	-1 	1
5 	1268 	0 	3 	2 	-1 	1
6 	1519 	0 	3 	1 	-1 	1
7 	1769 	0 	3 	1 	-1 	1
8 	2021 	0 	3 	0 	-1 	2
9 	2273 	0 	3 	0 	-1 	2
10 	2523 	1 	3 	0 	1 	2
11 	2775 	0 	3 	0 	-1 	2
12 	3025 	1 	3 	0 	1 	2

What do you think @brandur? Am I forgetting something?

Thanks!!

CI fixes welcome?

- name: "Get release info"
  id: get_release_info
  run: |
    echo ::set-output name=file_name::${REPOSITORY_NAME##*/}-${TAG_REF_NAME##*/}-${{ matrix.target }}.tar.gz # RepositoryName-v1.0.0-arch.tar.gz
    value=`cat release_url/release_url.txt`
    echo ::set-output name=upload_url::$value

I noticed this library still uses ::set-output, which is deprecated. The fix (4 lines) should be straightforward.

-echo ::set-output name=file_name::${REPOSITORY_NAME##*/}-${TAG_REF_NAME##*/}-${{ matrix.target }}.tar.gz # RepositoryName-v1.0.0-arch.tar.gz
+echo "file_name=${REPOSITORY_NAME##*/}-${TAG_REF_NAME##*/}-${{ matrix.target }}.tar.gz" >> "$GITHUB_OUTPUT" # RepositoryName-v1.0.0-arch.tar.gz

I see CI is also throwing rust errors. I don't mind taking a look, if I'm to update the CI anyway. Would a PR be welcome?

Returning Unused Tokens

I've been utilizing redis-cell for a research project and first and foremost thank you for your work.

I'm in a situation where I want an all-or-nothing check across two different keys that can be throttled. I need to check one, have it pass, then check the other and have it also pass before I approve the action. I perform the entire atomic operation via a Lua script.

In the event the second key cl.throttle returns 1 (throttled), is it appropriate to return the first via:

cl.throttle key burst max period -1

Preliminary tests seem to indicate this works, but I'm unsure if this was intentional. Is there a potential fringe case where this would fail or un-optimize key usage?

Redis server failed with zero limit

When I try to simulate many parallel calls with zero limit param:

1000.times { redis.call 'CL.THROTTLE', 'test', 0, 0, 60, 1 }

Redis server has failed and stopped with following report

=== REDIS BUG REPORT START: Cut & paste starting from here ===
1:M 05 Apr 2020 18:38:24.582 # Redis 5.0.8 crashed by signal: 11
1:M 05 Apr 2020 18:38:24.582 # Crashed running the instruction at: 0x7fa355b70611
1:M 05 Apr 2020 18:38:24.582 # Accessing address: (nil)
1:M 05 Apr 2020 18:38:24.582 # Failed assertion: <no assertion failed> (<no file>:0)
redis_1  |
------ STACK TRACE ------
thread '<unnamed>' panicked at 'assertion failed: nsec >= 0 && nsec < NSEC_PER_SEC', /cargo/registry/src/github.com-1ecc6299db9ec823/time-0.1.42/src/lib.rs:86:9
note: Run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
fatal runtime error: failed to initiate panic, error 5
EIP:
/lib/x86_64-linux-gnu/libc.so.6(abort+0x1fd)[0x7fa355b70611]
redis_1  |
Backtrace:
redis-server *:6379(logStackTrace+0x32)[0x55741d7dc592]
redis-server *:6379(sigsegvHandler+0x9e)[0x55741d7dcc6e]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x12730)[0x7fa355d21730]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x1fd)[0x7fa355b70611]
/usr/local/etc/redis/libredis_cell.so(+0x97757)[0x7fa354f85757]
/usr/local/etc/redis/libredis_cell.so(+0x85031)[0x7fa354f73031]
/usr/local/etc/redis/libredis_cell.so(rust_panic+0x7a)[0x7fa354f74fba]
/usr/local/etc/redis/libredis_cell.so(_ZN3std9panicking20rust_panic_with_hook17h744417edfe714d72E+0x2d2)[0x7fa354f74e72]
/usr/local/etc/redis/libredis_cell.so(+0x5d975)[0x7fa354f4b975]
/usr/local/etc/redis/libredis_cell.so(_ZN50_$LT$time..Tm$u20$as$u20$core..cmp..PartialOrd$GT$11partial_cmp17h44a03615e3a2ca19E+0x197)[0x7fa354f49ff7]
/usr/local/etc/redis/libredis_cell.so(+0x55301)[0x7fa354f43301]
/usr/local/etc/redis/libredis_cell.so(_ZN74_$LT$redis_cell..ThrottleCommand$u20$as$u20$redis_cell..redis..Command$GT$3run17h1e54095770ea729bE+0x258)[0x7fa354f44d98]
/usr/local/etc/redis/libredis_cell.so(+0x4fb55)[0x7fa354f3db55]
redis-server *:6379(RedisModuleCommandDispatcher+0x54)[0x55741d808d84]
redis-server *:6379(call+0x9b)[0x55741d79835b]
redis-server *:6379(processCommand+0x51e)[0x55741d798c1e]
redis-server *:6379(processInputBuffer+0x171)[0x55741d7a8e21]
redis-server *:6379(aeProcessEvents+0x101)[0x55741d792311]
redis-server *:6379(aeMain+0x2b)[0x55741d79271b]
redis-server *:6379(main+0x4ca)[0x55741d78f57a]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb)[0x7fa355b7209b]
redis-server *:6379(_start+0x2a)[0x55741d78f7da]
redis_1  |
------ INFO OUTPUT ------
# Server
redis_version:5.0.8
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:f5de7c59791f2d0a
redis_mode:standalone
os:Linux 4.19.76-linuxkit x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:8.3.0
process_id:1
run_id:ecbdfa470a829ff934bea5e4fec66b7cd1ca9ef6
tcp_port:6379
uptime_in_seconds:94
uptime_in_days:0
hz:10
configured_hz:10
lru_clock:9053600
executable:/data/redis-server
config_file:
redis_1  |
# Clients
connected_clients:1
client_recent_max_input_buffer:2
client_recent_max_output_buffer:0
blocked_clients:0
redis_1  |
# Memory
used_memory:854112
used_memory_human:834.09K
used_memory_rss:4956160
used_memory_rss_human:4.73M
used_memory_peak:854112
used_memory_peak_human:834.09K
used_memory_peak_perc:100.14%
used_memory_overhead:840710
used_memory_startup:790888
used_memory_dataset:13402
used_memory_dataset_perc:21.20%
allocator_allocated:943392
allocator_active:1126400
allocator_resident:3534848
total_system_memory:4129947648
total_system_memory_human:3.85G
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
allocator_frag_ratio:1.19
allocator_frag_bytes:183008
allocator_rss_ratio:3.14
allocator_rss_bytes:2408448
rss_overhead_ratio:1.40
rss_overhead_bytes:1421312
mem_fragmentation_ratio:6.10
mem_fragmentation_bytes:4144272
mem_not_counted_for_evict:0
mem_replication_backlog:0
mem_clients_slaves:0
mem_clients_normal:49694
mem_aof_buffer:0
mem_allocator:jemalloc-5.1.0
active_defrag_running:0
lazyfree_pending_objects:0
redis_1  |
# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1586111810
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:0
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0
redis_1  |
# Stats
total_connections_received:1
total_commands_processed:52
instantaneous_ops_per_sec:0
total_net_input_bytes:3051
total_net_output_bytes:1834
instantaneous_input_kbps:0.00
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:51
expired_stale_perc:0.26
expired_time_cap_reached_count:0
evicted_keys:0
keyspace_hits:1
keyspace_misses:52
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0
redis_1  |
# Replication
role:master
connected_slaves:0
master_replid:94a55afbe34d3d42c9e09cbbd7073f0b8ef88e0a
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
redis_1  |
# CPU
used_cpu_sys:0.042171
used_cpu_user:0.034461
used_cpu_sys_children:0.002327
used_cpu_user_children:0.003539
redis_1  |
# Commandstats
cmdstat_cl.throttle:calls=52,usec=3994,usec_per_call=76.81
redis_1  |
# Cluster
cluster_enabled:0
redis_1  |
# Keyspace
db0:keys=1,expires=1,avg_ttl=0
redis_1  |
------ CLIENT LIST OUTPUT ------
id=3 addr=172.19.0.1:41424 fd=8 name= age=51 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=61 qbuf-free=32707 obl=0 oll=0 omem=0 events=r cmd=cl.throttle
redis_1  |
------ CURRENT CLIENT INFO ------
id=3 addr=172.19.0.1:41424 fd=8 name= age=51 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=61 qbuf-free=32707 obl=0 oll=0 omem=0 events=r cmd=cl.throttle
argv[0]: 'CL.THROTTLE'
argv[1]: 'test'
argv[2]: '0'
argv[3]: '0'
argv[4]: '60'
argv[5]: '1'
1:M 05 Apr 2020 18:38:24.583 # key 'test' found in DB containing the following object:
1:M 05 Apr 2020 18:38:24.583 # Object type: 0
1:M 05 Apr 2020 18:38:24.583 # Object encoding: 0
1:M 05 Apr 2020 18:38:24.583 # Object refcount: 1
1:M 05 Apr 2020 18:38:24.583 # Object raw string len: 20
1:M 05 Apr 2020 18:38:24.583 # Object raw string content: "-7637260132274145208"
redis_1  |
------ REGISTERS ------
1:M 05 Apr 2020 18:38:24.583 #
RAX:0000000000000000 RBX:0000000000000000
RCX:0000000000000000 RDX:0000000000000000
RDI:0000000000000002 RSI:00007ffca62dcac0
RBP:00007fa354fbfd88 RSP:00007ffca62dcbe0
R8 :0000000000000000 R9 :00007ffca62dcac0
R10:0000000000000008 R11:0000000000000246
R12:0000000000000000 R13:0000000000000046
R14:0000000000000009 R15:0000000000000001
RIP:00007fa355b70611 EFL:0000000000010246
CSGSFS:002b000000000033
1:M 05 Apr 2020 18:38:24.583 # (00007ffca62dcbef) -> 0000000000000000
1:M 05 Apr 2020 18:38:24.583 # (00007ffca62dcbee) -> 0000000000000000
1:M 05 Apr 2020 18:38:24.584 # (00007ffca62dcbed) -> 0000000000000000
1:M 05 Apr 2020 18:38:24.584 # (00007ffca62dcbec) -> 0000000000000000
1:M 05 Apr 2020 18:38:24.584 # (00007ffca62dcbeb) -> 0000000000000000
1:M 05 Apr 2020 18:38:24.584 # (00007ffca62dcbea) -> 0000000000000000
1:M 05 Apr 2020 18:38:24.584 # (00007ffca62dcbe9) -> 0000000000000000
1:M 05 Apr 2020 18:38:24.584 # (00007ffca62dcbe8) -> 0000000000000000
1:M 05 Apr 2020 18:38:24.584 # (00007ffca62dcbe7) -> 0000000000000000
1:M 05 Apr 2020 18:38:24.584 # (00007ffca62dcbe6) -> 0000000000000000
1:M 05 Apr 2020 18:38:24.584 # (00007ffca62dcbe5) -> 0000000000000000
1:M 05 Apr 2020 18:38:24.584 # (00007ffca62dcbe4) -> 0000000000000000
1:M 05 Apr 2020 18:38:24.584 # (00007ffca62dcbe3) -> 0000000000000000
1:M 05 Apr 2020 18:38:24.584 # (00007ffca62dcbe2) -> 0000000000000000
1:M 05 Apr 2020 18:38:24.584 # (00007ffca62dcbe1) -> 0000000000000000
1:M 05 Apr 2020 18:38:24.584 # (00007ffca62dcbe0) -> 0000000000000020
redis_1  |
------ FAST MEMORY TEST ------
1:M 05 Apr 2020 18:38:24.584 # Bio thread for job type #0 terminated
1:M 05 Apr 2020 18:38:24.584 # Bio thread for job type #1 terminated
1:M 05 Apr 2020 18:38:24.585 # Bio thread for job type #2 terminated
*** Preparing to test memory region 55741d90a000 (2248704 bytes)
*** Preparing to test memory region 55741e036000 (135168 bytes)
*** Preparing to test memory region 7fa3536ec000 (8388608 bytes)
*** Preparing to test memory region 7fa353eed000 (8388608 bytes)
*** Preparing to test memory region 7fa3546ee000 (8388608 bytes)
*** Preparing to test memory region 7fa355200000 (8388608 bytes)
*** Preparing to test memory region 7fa355b48000 (24576 bytes)
*** Preparing to test memory region 7fa355d0b000 (16384 bytes)
*** Preparing to test memory region 7fa355d2c000 (16384 bytes)
*** Preparing to test memory region 7fa355ec2000 (8192 bytes)
*** Preparing to test memory region 7fa355eef000 (4096 bytes)
.O.O.O.O.O.O.O.O.O.O.O
Fast memory test PASSED, however your memory can still be broken. Please run a memory test for several hours if possible.
redis_1  |
------ DUMPING CODE AROUND EIP ------
Symbol: abort (base: 0x7fa355b70414)
Module: /lib/x86_64-linux-gnu/libc.so.6 (base 0x7fa355b4e000)
$ xxd -r -p /tmp/dump.hex /tmp/dump.bin
$ objdump --adjust-vma=0x7fa355b70414 -D -b binary -m i386:x86-64 /tmp/dump.bin
------
1:M 05 Apr 2020 18:38:24.737 # dump of function (hexdump of 637 bytes):
Function at 0x7fa355b85a50 is sigprocmask
Function at 0x7fa355b856b0 is gsignal
Function at 0x7fa355b85a20 is sigaction
Function at 0x7fa355c149a0 is _exit
Function at 0x7fa355b70414 is abort
redis_1  |
=== REDIS BUG REPORT END. Make sure to include from START to END. ===

PR Accept for Resetting the Keys / Removing the Existing Keys

Hi,
I have started using the module to integrate with our deployment. I am not 100 percent sure, but so far from the observed behavior it seems that once the CL.THROTTLE command executes, a key entry is made, and subsequent CL.THROTTLE calls use that entry to process the rate limit quota.
I'm working on a PR for the following, but wanted to discuss beforehand whether I have interpreted this correctly:

  1. Removing a key altogether
  2. Resetting / updating a key with a new rate limit tuple

Waiting for your response
Thanks
Romit

Clarification on count & period

Is there a difference in behaviour between the following 2 invocations?

CL.THROTTLE user123 15 1 2 1
CL.THROTTLE user123 15 30 60 1

Modernize rustfmt conventions

Rustfmt has changed quite a lot since I wrote this originally, and the project could stand to be run through a more modern version of Rustfmt so that contributors with save hooks don't automatically produce a 1000-line diff including all kinds of unrelated code.

We could also use a Travis check to make sure that any newly contributed code is Rustfmt-compliant.

Feature needed. Queued requests

It would be great to have a feature like Nginx's ngx_http_limit_req_module, where some number of requests that exceed the rate limit are not rejected immediately, but are slowed (queued) a little to fit the required rate.

If possible, make the Redis command CL.THROTTLE block within the burst number when a delay option is passed, just like Nginx's burst and nodelay options.

I don't know how to implement this scenario within the client app alone.
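So far the closest I've come is a client-side approximation, roughly like this (a sketch assuming redis-py; throttled_call is a hypothetical helper of mine, and it relies on the retry_after field, element 4 of the CL.THROTTLE reply, in seconds):

```python
import time

# Hypothetical client-side "queueing" helper (assumes redis-py and a
# loaded redis-cell module): instead of rejecting a limited request,
# sleep for retry_after and try again.
def throttled_call(client, key, fn, max_burst=15, count=30, period=60):
    while True:
        limited, _, _, retry_after, _ = client.execute_command(
            "CL.THROTTLE", key, max_burst, count, period, 1)
        if not limited:
            return fn()
        time.sleep(max(retry_after, 1))  # retry_after is -1 when not limited
```

This works, but every waiting client has to poll, which is exactly why a server-side blocking variant would be nicer.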

Should provide flags for redis 6

I've noticed that the list of flags returned in the 7th value of COMMAND for cl.throttle on Redis 6.x is empty. I think @write and @fast apply.

This may be related to issues I'm having with redis-cell and lettuce on redis 6, but I'm not sure: redis/lettuce#1327

I'm not able to determine from the documentation what the new flag list is; so if anyone knows that would be helpful; otherwise I'll update this ticket when I find out:

 72) 1) "cl.throttle"
     2) (integer) -1
     3) 1) write
     4) (integer) 0
     5) (integer) 0
     6) (integer) 0
     7) (empty array)

Decouple redismodule - rust integration from redis-cell

Hi @brandur,

I don't have much experience in Rust, but I'm wondering how complex it would be to decouple the part of the code base that interacts with Redis from your module.

Ideally, I'd like to know how much work it would be to create a crate that can simply be plugged into a project's dependencies.

Best,

Simone

does not work when parameters are large

I downloaded the latest version and used it with Redis server version 5.0.7. I tested the cl.throttle command with both small and large parameters (small capacity + refill rate, large capacity + refill rate). The results show that when capacity and refill rate are both large, around 6000 each (6000 per second, a kind of TPS throttling), it does not work at all. Besides, cl.throttle a 6000 6000 1 returns a third value larger than the 6000 capacity, which is very odd.

import redis

client = redis.StrictRedis()

def start_throttle():
    n = 0
    while True:
        n += 1
        # in another test, I used 6000, 6000, 1
        result = client.execute_command("cl.throttle", "a", 5, 3, 1)
        print(n, result)

Time window calculation problem

CL.THROTTLE user123 1 120 60 should mean 120 tokens on the key are allowed over a 60-second period, but in my tests exactly 2 tokens are allowed within 1 second, and a third token within the same second fails.

thanks!

CL.THROTTLE command does not trigger snapshot creation for RDB persistence.

How to reproduce:

  1. Start Redis server
redis-server --loadmodule libredis_cell.dylib --save 10 1
48665:C 21 Feb 2021 12:13:51.776 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
48665:C 21 Feb 2021 12:13:51.776 # Redis version=6.0.10, bits=64, commit=00000000, modified=0, pid=48665, just started
48665:C 21 Feb 2021 12:13:51.776 # Configuration loaded
48665:M 21 Feb 2021 12:13:51.777 * Increased maximum number of open files to 10032 (it was originally set to 256).
                _._
           _.-``__ ''-._
      _.-``    `.  `_.  ''-._           Redis 6.0.10 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._
 (    '      ,       .-`  | `,    )     Running in standalone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 48665
  `-._    `-._  `-./  _.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |           http://redis.io
  `-._    `-._`-.__.-'_.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |
  `-._    `-._`-.__.-'_.-'    _.-'
      `-._    `-.__.-'    _.-'
          `-._        _.-'
              `-.__.-'

48665:M 21 Feb 2021 12:13:51.778 # Server initialized
48665:M 21 Feb 2021 12:13:51.779 * Module 'redis-cell' loaded from libredis_cell.dylib
48665:M 21 Feb 2021 12:13:51.780 * Loading RDB produced by version 6.0.10
48665:M 21 Feb 2021 12:13:51.780 * RDB age 46 seconds
48665:M 21 Feb 2021 12:13:51.780 * RDB memory usage when created 0.96 Mb
48665:M 21 Feb 2021 12:13:51.780 * DB loaded from disk: 0.000 seconds
48665:M 21 Feb 2021 12:13:51.780 * Ready to accept connections
  2. Execute a command
127.0.0.1:6379> keys *
(empty array)
127.0.0.1:6379> CL.THROTTLE user123 15 1 600 1
1) (integer) 0
2) (integer) 16
3) (integer) 15
4) (integer) -1
5) (integer) 600
127.0.0.1:6379> keys *
1) "user123"

The RDB snapshot is expected to be created but it is not.

  3. Now let's try to execute a simple SET command:
127.0.0.1:6379> set key value
OK
127.0.0.1:6379> keys *
1) "key"
2) "user123"
  4. DB saved as expected.
48665:M 21 Feb 2021 12:15:26.450 * 1 changes in 10 seconds. Saving...
48665:M 21 Feb 2021 12:15:26.450 * Background saving started by pid 48667
48667:C 21 Feb 2021 12:15:26.452 * DB saved on disk

The same also occurs in Redis for Docker, versions 5 and 6. I'm not sure whether this is a Redis issue or a module issue.

redis-cell is rounding down

It would be better if the seconds were rounded up instead of down.
I also don't understand why the limit is max_burst + 1 (instead of max_burst).
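From simulating GCRA myself (a rough sketch, which may not match the module's actual code), the +1 seems to come from the steady rate always admitting one in-flight token, with max_burst acting as slack on top of that:

```python
def gcra_allow(tat, now, t, tau):
    # Minimal GCRA sketch: tat is the theoretical arrival time,
    # t the emission interval, tau = t * max_burst the burst tolerance.
    new_tat = max(tat, now) + t
    if new_tat - now > t + tau:
        return False, tat  # rejected; tat unchanged
    return True, new_tat

# With max_burst = 2 (t = 1, tau = 2), exactly 3 back-to-back
# requests at now = 0 pass, i.e. max_burst + 1.
tat, allowed = 0.0, 0
for _ in range(5):
    ok, tat = gcra_allow(tat, now=0.0, t=1.0, tau=2.0)
    allowed += ok
print(allowed)  # 3
```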

Response type not string compatible.

I have a rust app that is using redis and redis-cell. I am using the bb8 crate for connection pooling. When I put the server under heavy load, there are errors in my log. I'm unsure if this is a problem in redis-cell or bb8 or my code.

2022-08-10T21:46:09.361897Z  WARN web3_proxy::bb8_helpers: redis error err=Response was of incompatible type: "Response type not string compatible." (response was bulk(int(0), int(2000001), int(1999997), int(-1), int(0)))

Any ideas?

Publish on crates.io?

Hi,

I would like to use this library in my project. Basically I want a coupled/embedded rate limiter via in memory Store.

But I can't find it on crates.io. Is there any reason for that?

Production readiness

I was wondering if this is used at Stripe in production?

I saw this article from the Stripe blog on rate limiting mentioning the usage of Redis, and was curious whether there is any plan to leverage redis-cell?

Cluster & Client Support

I've got this up and running for some basic tests. I'm pretty excited about what it brings to the table. Thank you!

Are you aware of this being implemented in a redis cluster scenario and how it may have performed? Also how does a redis client like predis support this new call?

A redis-cell Questions about strings

I have run into a problem using redis-cell. As shown in the code, passing a string variable throws the error "Cell error: invalid digit found in string", but passing a string literal works.
To my knowledge, using a string variable should behave the same as using a string literal. Is there a different interpretation in redis-cell, or is there something in Go that causes this problem?
How can I use a string variable?
Thanks

func test() error {
	key := "example"
	cli := redis.NewClient(&redis.Options{
		Addr:     "127.0.0.1:6379",
		Password: "",
		DB:       0,
	})
	if _, err := cli.Ping().Result(); err != nil {
		return fmt.Errorf("redis ping failed, err is: %s", err.Error())
	}
	// fails: err is "Cell error: invalid digit found in string"
	if _, err := cli.Do("cl.throttle", key, 89, 90, 10, 1).Result(); err != nil {
		return err
	}
	// succeeds
	if _, err := cli.Do("cl.throttle", "111"+key, 89, 90, 10, 1).Result(); err != nil {
		return err
	}
	// succeeds
	if _, err := cli.Do("cl.throttle", "example", 89, 90, 10, 1).Result(); err != nil {
		return err
	}
	return nil
}
