

bottleneck


Bottleneck is a lightweight and zero-dependency Task Scheduler and Rate Limiter for Node.js and the browser.

Bottleneck is an easy solution because it adds very little complexity to your code. It is battle-hardened, reliable, and production-ready, and is used at large scale in private companies and open source software.

It supports Clustering: it can rate limit jobs across multiple Node.js instances. It uses Redis and strictly atomic operations to stay reliable in the presence of unreliable clients and networks. It also supports Redis Cluster and Redis Sentinel.

Upgrading from version 1?

Install

npm install --save bottleneck
import Bottleneck from "bottleneck";

// Note: To support older browsers and Node <6.0, you must import the ES5 bundle instead.
var Bottleneck = require("bottleneck/es5");

Quick Start

Step 1 of 3

Most APIs have a rate limit. For example, to execute 3 requests per second:

const limiter = new Bottleneck({
  minTime: 333
});

If there's a chance some requests might take longer than 333ms and you want to prevent more than 1 request from running at a time, add maxConcurrent: 1:

const limiter = new Bottleneck({
  maxConcurrent: 1,
  minTime: 333
});

minTime and maxConcurrent are enough for the majority of use cases. They work well together to ensure a smooth rate of requests. If your use case requires executing requests in bursts or every time a quota resets, look into Reservoir Intervals.

Step 2 of 3

➤ Using promises?

Instead of this:

myFunction(arg1, arg2)
.then((result) => {
  /* handle result */
});

Do this:

limiter.schedule(() => myFunction(arg1, arg2))
.then((result) => {
  /* handle result */
});

Or this:

const wrapped = limiter.wrap(myFunction);

wrapped(arg1, arg2)
.then((result) => {
  /* handle result */
});

➤ Using async/await?

Instead of this:

const result = await myFunction(arg1, arg2);

Do this:

const result = await limiter.schedule(() => myFunction(arg1, arg2));

Or this:

const wrapped = limiter.wrap(myFunction);

const result = await wrapped(arg1, arg2);

➤ Using callbacks?

Instead of this:

someAsyncCall(arg1, arg2, callback);

Do this:

limiter.submit(someAsyncCall, arg1, arg2, callback);

Step 3 of 3

Remember...

Bottleneck builds a queue of jobs and executes them as soon as possible. By default, the jobs will be executed in the order they were received.

Read the 'Gotchas' and you're good to go. Or keep reading to learn about all the fine tuning and advanced options available. If your rate limits need to be enforced across a cluster of computers, read the Clustering docs.

Need help debugging your application?

Instead of throttling, maybe you want to batch up requests into fewer calls?

Gotchas & Common Mistakes

  • Make sure the function you pass to schedule() or wrap() only returns once all the work it does has completed.

Instead of this:

limiter.schedule(() => {
  tasksArray.forEach(x => processTask(x));
  // BAD, we return before our processTask() functions are finished processing!
});

Do this:

limiter.schedule(() => {
  const allTasks = tasksArray.map(x => processTask(x));
  // GOOD, we wait until all tasks are done.
  return Promise.all(allTasks);
});
  • If you're passing an object's method as a job, you'll probably need to bind() the object:
// instead of this:
limiter.schedule(object.doSomething);
// do this:
limiter.schedule(object.doSomething.bind(object));
// or, wrap it in an arrow function instead:
limiter.schedule(() => object.doSomething());
  • Bottleneck requires Node 6+ to function. However, an ES5 build is included: var Bottleneck = require("bottleneck/es5");.

  • Make sure you're catching "error" events emitted by your limiters!

  • Consider setting a maxConcurrent value instead of leaving it null. This can help your application's performance, especially if you think the limiter's queue might become very long.

  • If you plan on using priorities, make sure to set a maxConcurrent value.

  • When using submit(), if a callback isn't necessary, you must pass null or an empty function instead. It will not work otherwise.

  • When using submit(), make sure all the jobs will eventually complete by calling their callback, or set an expiration. Even if you submitted your job with a null callback, it still needs to call its callback. This is particularly important if you are using a maxConcurrent value that isn't null (unlimited), otherwise uncompleted jobs will clog up the limiter and no new jobs will be allowed to run. It's safe to call the callback more than once; subsequent calls are ignored.

  • Using tools like mockdate in your tests to change time in JavaScript will likely result in undefined behavior from Bottleneck.

Docs

Constructor

const limiter = new Bottleneck({/* options */});

Basic options:

Option Default Description
maxConcurrent null (unlimited) How many jobs can be executing at the same time. Consider setting a value instead of leaving it null, it can help your application's performance, especially if you think the limiter's queue might get very long.
minTime 0 ms How long to wait after launching a job before launching another one.
highWater null (unlimited) How long can the queue be? When the queue length exceeds that value, the selected strategy is executed to shed the load.
strategy Bottleneck.strategy.LEAK Which strategy to use when the queue gets longer than the high water mark. Read about strategies. Strategies are never executed if highWater is null.
penalty 15 * minTime, or 5000 when minTime is 0 The penalty value used by the BLOCK strategy.
reservoir null (unlimited) How many jobs can be executed before the limiter stops executing jobs. If reservoir reaches 0, no jobs will be executed until it is no longer 0. New jobs will still be queued up.
reservoirRefreshInterval null (disabled) Every reservoirRefreshInterval milliseconds, the reservoir value will be automatically updated to the value of reservoirRefreshAmount. The reservoirRefreshInterval value should be a multiple of 250 (5000 for Clustering).
reservoirRefreshAmount null (disabled) The value to set reservoir to when reservoirRefreshInterval is in use.
reservoirIncreaseInterval null (disabled) Every reservoirIncreaseInterval milliseconds, the reservoir value will be automatically incremented by reservoirIncreaseAmount. The reservoirIncreaseInterval value should be a multiple of 250 (5000 for Clustering).
reservoirIncreaseAmount null (disabled) The increment applied to reservoir when reservoirIncreaseInterval is in use.
reservoirIncreaseMaximum null (disabled) The maximum value that reservoir can reach when reservoirIncreaseInterval is in use.
Promise Promise (built-in) This lets you override the Promise library used by Bottleneck.

Reservoir Intervals

Reservoir Intervals let you execute requests in bursts, by automatically controlling the limiter's reservoir value. The reservoir is simply the number of jobs the limiter is allowed to execute. Once the value reaches 0, it stops starting new jobs.

There are 2 types of Reservoir Intervals: Refresh Intervals and Increase Intervals.

Refresh Interval

In this example, we throttle to 100 requests every 60 seconds:

const limiter = new Bottleneck({
  reservoir: 100, // initial value
  reservoirRefreshAmount: 100,
  reservoirRefreshInterval: 60 * 1000, // must be divisible by 250

  // also use maxConcurrent and/or minTime for safety
  maxConcurrent: 1,
  minTime: 333 // pick a value that makes sense for your use case
});

reservoir is a counter decremented every time a job is launched; we set its initial value to 100. Then, every reservoirRefreshInterval (60000 ms), reservoir is automatically updated to the value of reservoirRefreshAmount (100).

Increase Interval

In this example, we throttle jobs to meet the Shopify API Rate Limits. Users are allowed to send 40 requests initially, then every second grants 2 more requests up to a maximum of 40.

const limiter = new Bottleneck({
  reservoir: 40, // initial value
  reservoirIncreaseAmount: 2,
  reservoirIncreaseInterval: 1000, // must be divisible by 250
  reservoirIncreaseMaximum: 40,

  // also use maxConcurrent and/or minTime for safety
  maxConcurrent: 5,
  minTime: 250 // pick a value that makes sense for your use case
});

Warnings

Reservoir Intervals are an advanced feature, please take the time to read and understand the following warnings.

  • Reservoir Intervals are not a replacement for minTime and maxConcurrent. It's strongly recommended to also use minTime and/or maxConcurrent to spread out the load. For example, suppose a lot of jobs are queued up because the reservoir is 0. Every time the Refresh Interval is triggered, a number of jobs equal to reservoirRefreshAmount will automatically be launched, all at the same time! To prevent this flooding effect and keep your application running smoothly, use minTime and maxConcurrent to stagger the jobs.

  • The Reservoir Interval starts from the moment the limiter is created. Let's suppose we're using reservoirRefreshAmount: 5. If you happen to add 10 jobs just 1ms before the refresh is triggered, the first 5 will run immediately, then 1ms later it will refresh the reservoir value and that will make the last 5 also run right away. It will have run 10 jobs in just over 1ms no matter what your reservoir interval was!

  • Reservoir Intervals prevent a limiter from being garbage collected. Call limiter.disconnect() to clear the interval and allow the memory to be freed. However, it's not necessary to call .disconnect() to allow the Node.js process to exit.

submit()

Adds a job to the queue. This is the callback version of schedule().

limiter.submit(someAsyncCall, arg1, arg2, callback);

You can pass null instead of an empty function if there is no callback, but someAsyncCall still needs to call its callback to let the limiter know it has completed its work.

submit() can also accept advanced options.

schedule()

Adds a job to the queue. This is the Promise and async/await version of submit().

const fn = function(arg1, arg2) {
  return httpGet(arg1, arg2); // Here httpGet() returns a promise
};

limiter.schedule(fn, arg1, arg2)
.then((result) => {
  /* ... */
});

In other words, schedule() takes a function fn and a list of arguments, and returns a promise for the result of calling fn with those arguments. The job is executed according to the rate limits.

schedule() can also accept advanced options.

Here's another example:

// suppose that `client.get(url)` returns a promise

const url = "https://wikipedia.org";

limiter.schedule(() => client.get(url))
.then(response => console.log(response.body));

wrap()

Takes a function that returns a promise. Returns a function identical to the original, but rate limited.

const wrapped = limiter.wrap(fn);

wrapped()
.then(function (result) {
  /* ... */
})
.catch(function (error) {
  // Bottleneck might need to fail the job even if the original function can never fail.
  // For example, your job is taking longer than the `expiration` time you've set.
});

Job Options

submit(), schedule(), and wrap() all accept advanced options.

// Submit
limiter.submit({/* options */}, someAsyncCall, arg1, arg2, callback);

// Schedule
limiter.schedule({/* options */}, fn, arg1, arg2);

// Wrap
const wrapped = limiter.wrap(fn);
wrapped.withOptions({/* options */}, arg1, arg2);
Option Default Description
priority 5 A priority between 0 and 9. A job with a priority of 4 will be queued ahead of a job with a priority of 5. Important: You must set a low maxConcurrent value for priorities to work, otherwise there is nothing to queue because jobs will be scheduled immediately!
weight 1 Must be an integer equal to or higher than 0. The weight is what increases the number of running jobs (up to maxConcurrent) and decreases the reservoir value.
expiration null (unlimited) The number of milliseconds a job is given to complete. Jobs that execute for longer than expiration ms will be failed with a BottleneckError.
id <no-id> You should give an ID to your jobs, it helps with debugging.

Strategies

A strategy is a simple algorithm that is executed every time adding a job would cause the number of queued jobs to exceed highWater. Strategies are never executed if highWater is null.

Bottleneck.strategy.LEAK

When adding a new job to a limiter, if the queue length reaches highWater, drop the oldest job with the lowest priority. This is useful when jobs that have been waiting for too long are not important anymore. If all the queued jobs are more important (based on their priority value) than the one being added, it will not be added.

Bottleneck.strategy.OVERFLOW_PRIORITY

Same as LEAK, except it will only drop jobs that are less important than the one being added. If all the queued jobs are as or more important than the new one, it will not be added.

Bottleneck.strategy.OVERFLOW

When adding a new job to a limiter, if the queue length reaches highWater, do not add the new job. This strategy totally ignores priority levels.

Bottleneck.strategy.BLOCK

When adding a new job to a limiter, if the queue length reaches highWater, the limiter falls into "blocked mode". All queued jobs are dropped and no new jobs will be accepted until the limiter unblocks. It will unblock after penalty milliseconds have passed without receiving a new job. penalty is equal to 15 * minTime (or 5000 if minTime is 0) by default. This strategy is ideal when bruteforce attacks are to be expected. This strategy totally ignores priority levels.

Jobs lifecycle

  1. Received. Your new job has been added to the limiter. Bottleneck needs to check whether it can be accepted into the queue.
  2. Queued. Bottleneck has accepted your job, but it cannot tell yet at what exact timestamp it will run, because that depends on previous jobs.
  3. Running. Your job is not in the queue anymore; it will be executed after a delay computed from your minTime setting.
  4. Executing. Your job is executing its code.
  5. Done. Your job has completed.

Note: By default, Bottleneck does not keep track of DONE jobs, to save memory. You can enable this feature by passing trackDoneStatus: true as an option when creating a limiter.

counts()

const counts = limiter.counts();

console.log(counts);
/*
{
  RECEIVED: 0,
  QUEUED: 0,
  RUNNING: 0,
  EXECUTING: 0,
  DONE: 0
}
*/

Returns an object with the current number of jobs per status in the limiter.

jobStatus()

console.log(limiter.jobStatus("some-job-id"));
// Example: QUEUED

Returns the status of the job with the provided job id in the limiter. Returns null if no job with that id exists.

jobs()

console.log(limiter.jobs("RUNNING"));
// Example: ['id1', 'id2']

Returns an array of all the job ids with the specified status in the limiter. Not passing a status string returns all the known ids.

queued()

const count = limiter.queued(priority);

console.log(count);

priority is optional. Returns the number of QUEUED jobs with the given priority level. Omitting the priority argument returns the total number of queued jobs in the limiter.

clusterQueued()

const count = await limiter.clusterQueued();

console.log(count);

Returns the number of QUEUED jobs in the Cluster.

empty()

if (limiter.empty()) {
  // do something...
}

Returns a boolean which indicates whether there are any RECEIVED or QUEUED jobs in the limiter.

running()

limiter.running()
.then((count) => console.log(count));

Returns a promise that resolves to the total weight of the RUNNING and EXECUTING jobs in the Cluster.

done()

limiter.done()
.then((count) => console.log(count));

Returns a promise that resolves to the total weight of DONE jobs in the Cluster. Does not require passing the trackDoneStatus: true option.

check()

limiter.check()
.then((wouldRunNow) => console.log(wouldRunNow));

Checks if a new job would be executed immediately if it was submitted now. Returns a promise that resolves to a boolean.

Events

'error'

limiter.on("error", function (error) {
  /* handle errors here */
});

The two main causes of error events are: uncaught exceptions in your event handlers, and network errors when Clustering is enabled.

'failed'

limiter.on("failed", function (error, jobInfo) {
  // This will be called every time a job fails.
});

'retry'

See Retries to learn how to automatically retry jobs.

limiter.on("retry", function (message, jobInfo) {
  // This will be called every time a job is retried.
});

'empty'

limiter.on("empty", function () {
  // This will be called when `limiter.empty()` becomes true.
});

'idle'

limiter.on("idle", function () {
  // This will be called when `limiter.empty()` is `true` and `limiter.running()` is `0`.
});

'dropped'

limiter.on("dropped", function (dropped) {
  // This will be called when a strategy was triggered.
  // The dropped request is passed to this event listener.
});

'depleted'

limiter.on("depleted", function (empty) {
  // This will be called every time the reservoir drops to 0.
  // The `empty` (boolean) argument indicates whether `limiter.empty()` is currently true.
});

'debug'

limiter.on("debug", function (message, data) {
  // Useful to figure out what the limiter is doing in real time
  // and to help debug your application
});

'received' 'queued' 'scheduled' 'executing' 'done'

limiter.on("queued", function (info) {
  // This event is triggered when a job transitions from one Lifecycle stage to another
});

See Jobs Lifecycle for more information.

These Lifecycle events are not triggered for jobs located on another limiter in a Cluster, for performance reasons.

Other event methods

Use removeAllListeners() with an optional event name as first argument to remove listeners.

Use .once() instead of .on() to only receive a single event.

Retries

The following example:

const limiter = new Bottleneck();

// Listen to the "failed" event
limiter.on("failed", async (error, jobInfo) => {
  const id = jobInfo.options.id;
  console.warn(`Job ${id} failed: ${error}`);

  if (jobInfo.retryCount === 0) { // Here we only retry once
    console.log(`Retrying job ${id} in 25ms!`);
    return 25;
  }
});

// Listen to the "retry" event
limiter.on("retry", (error, jobInfo) => console.log(`Now retrying ${jobInfo.options.id}`));

const main = async function () {
  let executions = 0;

  // Schedule one job
  const result = await limiter.schedule({ id: 'ABC123' }, async () => {
    executions++;
    if (executions === 1) {
      throw new Error("Boom!");
    } else {
      return "Success!";
    }
  });

  console.log(`Result: ${result}`);
}

main();

will output

Job ABC123 failed: Error: Boom!
Retrying job ABC123 in 25ms!
Now retrying ABC123
Result: Success!

To re-run your job, simply return an integer from the 'failed' event handler. The number returned is how many milliseconds to wait before retrying it. Return 0 to retry it immediately.

IMPORTANT: When you ask the limiter to retry a job it will not send it back into the queue. It will stay in the EXECUTING state until it succeeds or until you stop retrying it. This means that it counts as a concurrent job for maxConcurrent even while it's just waiting to be retried. The number of milliseconds to wait ignores your minTime settings.

updateSettings()

limiter.updateSettings(options);

The options are the same as the limiter constructor.

Note: Changes don't affect SCHEDULED jobs.

incrementReservoir()

limiter.incrementReservoir(incrementBy);

Returns a promise that resolves to the new reservoir value.

currentReservoir()

limiter.currentReservoir()
.then((reservoir) => console.log(reservoir));

Returns a promise that resolves to the current reservoir value.

stop()

The stop() method is used to safely shutdown a limiter. It prevents any new jobs from being added to the limiter and waits for all EXECUTING jobs to complete.

limiter.stop(options)
.then(() => {
  console.log("Shutdown completed!")
});

stop() returns a promise that resolves once all the EXECUTING jobs have completed and, if desired, once all non-EXECUTING jobs have been dropped.

Option Default Description
dropWaitingJobs true When true, drop all the RECEIVED, QUEUED and RUNNING jobs. When false, allow those jobs to complete before resolving the Promise returned by this method.
dropErrorMessage This limiter has been stopped. The error message used to drop jobs when dropWaitingJobs is true.
enqueueErrorMessage This limiter has been stopped and cannot accept new jobs. The error message used to reject a job added to the limiter after stop() has been called.

chain()

chain() routes a limiter's jobs through another limiter: tasks that are ready to be executed will be added to that other limiter. Suppose you have 2 types of tasks, A and B. They both have their own limiter with their own settings, but both must also follow a global limiter G:

const limiterA = new Bottleneck( /* some settings */ );
const limiterB = new Bottleneck( /* some different settings */ );
const limiterG = new Bottleneck( /* some global settings */ );

limiterA.chain(limiterG);
limiterB.chain(limiterG);

// Requests added to limiterA must follow the A and G rate limits.
// Requests added to limiterB must follow the B and G rate limits.
// Requests added to limiterG must follow the G rate limits.

To unchain, call limiter.chain(null);.

Group

The Group feature of Bottleneck manages many limiters automatically for you. It creates limiters dynamically and transparently.

Let's take a DNS server as an example of how Bottleneck can be used. It's a service that sees a lot of abuse and where incoming DNS requests need to be rate limited. Bottleneck is so tiny, it's acceptable to create one limiter for each origin IP, even if it means creating thousands of limiters. The Group feature is perfect for this use case. Create one Group and use the origin IP to rate limit each IP independently. Each call with the same key (IP) will be routed to the same underlying limiter. A Group is created like a limiter:

const group = new Bottleneck.Group(options);

The options object will be used for every limiter created by the Group.

The Group is then used with the .key(str) method:

// In this example, the key is an IP
group.key("77.66.54.32").schedule(() => {
  /* process the request */
});

key()

  • str : The key to use. All jobs added with the same key will use the same underlying limiter. Default: ""

The return value of .key(str) is a limiter. If it doesn't already exist, it is generated for you. Calling key() is how limiters are created inside a Group.

Limiters that have been idle for longer than 5 minutes are deleted to avoid memory leaks; this delay can be changed by passing a different timeout option, in milliseconds.

on("created")

group.on("created", (limiter, key) => {
  console.log("A new limiter was created for key: " + key)

  // Prepare the limiter, for example we'll want to listen to its "error" events!
  limiter.on("error", (err) => {
    // Handle errors here
  })
});

Listening for the "created" event is the recommended way to set up a new limiter. Your event handler is executed before key() returns the newly created limiter.

updateSettings()

const group = new Bottleneck.Group({ maxConcurrent: 2, minTime: 250 });
group.updateSettings({ minTime: 500 });

After executing the above commands, new limiters will be created with { maxConcurrent: 2, minTime: 500 }.

deleteKey()

  • str: The key for the limiter to delete.

Manually deletes the limiter at the specified key. When using Clustering, the Redis data is immediately deleted and the other Groups in the Cluster will eventually delete their local key automatically, unless it is still being used.

keys()

Returns an array containing all the keys in the Group.

clusterKeys()

Same as group.keys(), but returns all keys in this Group ID across the Cluster.

limiters()

const limiters = group.limiters();

console.log(limiters);
// [ { key: "some key", limiter: <limiter> }, { key: "some other key", limiter: <some other limiter> } ]

Batching

Some APIs can accept multiple operations in a single call. Bottleneck's Batching feature helps you take advantage of those APIs:

const batcher = new Bottleneck.Batcher({
  maxTime: 1000,
  maxSize: 10
});

batcher.on("batch", (batch) => {
  console.log(batch); // ["some-data", "some-other-data"]

  // Handle batch here
});

batcher.add("some-data");
batcher.add("some-other-data");

batcher.add() returns a Promise that resolves once the request has been flushed to a "batch" event.

Option Default Description
maxTime null (unlimited) Maximum acceptable time (in milliseconds) a request can have to wait before being flushed to the "batch" event.
maxSize null (unlimited) Maximum number of requests in a batch.

Batching doesn't throttle requests, it only groups them up optimally according to your maxTime and maxSize settings.

Clustering

Clustering lets many limiters access the same shared state, stored in Redis. Changes to the state are Atomic, Consistent and Isolated (and fully ACID with the right Durability configuration), to eliminate any chances of race conditions or state corruption. Your settings, such as maxConcurrent, minTime, etc., are shared across the whole cluster, which means, for example, that { maxConcurrent: 5 } guarantees no more than 5 jobs can ever run at a time in the entire cluster of limiters. 100% of Bottleneck's features are supported in Clustering mode. Enabling Clustering is as simple as changing a few settings. It's also a convenient way to store or export state for later use.

Bottleneck will attempt to spread load evenly across limiters.

Enabling Clustering

First, add redis or ioredis to your application's dependencies:

# NodeRedis (https://github.com/NodeRedis/node_redis)
npm install --save redis

# or ioredis (https://github.com/luin/ioredis)
npm install --save ioredis

Then create a limiter or a Group:

const limiter = new Bottleneck({
  /* Some basic options */
  maxConcurrent: 5,
  minTime: 500,
  id: "my-super-app", // All limiters with the same id will be clustered together

  /* Clustering options */
  datastore: "redis", // or "ioredis"
  clearDatastore: false,
  clientOptions: {
    host: "127.0.0.1",
    port: 6379

    // Redis client options
    // Using NodeRedis? See https://github.com/NodeRedis/node_redis#options-object-properties
    // Using ioredis? See https://github.com/luin/ioredis/blob/master/API.md#new-redisport-host-options
  }
});
Option Default Description
datastore "local" Where the limiter stores its internal state. The default ("local") keeps the state in the limiter itself. Set it to "redis" or "ioredis" to enable Clustering.
clearDatastore false When set to true, on initial startup, the limiter will wipe any existing Bottleneck state data on the Redis db.
clientOptions {} This object is passed directly to the redis client library you've selected.
clusterNodes null ioredis only. When clusterNodes is not null, the client will be instantiated by calling new Redis.Cluster(clusterNodes, clientOptions) instead of new Redis(clientOptions).
timeout null (no TTL) The Redis TTL in milliseconds (TTL) for the keys created by the limiter. When timeout is set, the limiter's state will be automatically removed from Redis after timeout milliseconds of inactivity.
Redis null Overrides the import/require of the redis/ioredis library. You shouldn't need to set this option unless your application is failing to start due to a failure to require/import the client library.

Note: When using Groups, the timeout option has a default of 300000 milliseconds and the generated limiters automatically receive an id with the pattern ${group.id}-${KEY}.

Note: If you are seeing a runtime error due to the require() function not being able to load redis/ioredis, then directly pass the module as the Redis option. Example:

import Redis from "ioredis"

const limiter = new Bottleneck({
  id: "my-super-app",
  datastore: "ioredis",
  clientOptions: { host: '12.34.56.78', port: 6379 },
  Redis
});

Unfortunately, this is a side effect of having to disable inlining, which is necessary to make Bottleneck easy to use in the browser.

Important considerations when Clustering

The first limiter connecting to Redis will store its constructor options on Redis and all subsequent limiters will be using those settings. You can alter the constructor options used by all the connected limiters by calling updateSettings(). The clearDatastore option instructs a new limiter to wipe any previous Bottleneck data (for that id), including previously stored settings.

Queued jobs are NOT stored on Redis. They are local to each limiter. Exiting the Node.js process will lose those jobs. This is because Bottleneck has no way to propagate the JS code to run a job across a different Node.js process than the one it originated on. Bottleneck doesn't keep track of the queue contents of the limiters on a cluster for performance and reliability reasons. You can use something like BeeQueue in addition to Bottleneck to get around this limitation.

Due to the above, functionality relying on the queue length happens purely locally:

  • Priorities are local. A higher priority job will run before a lower priority job on the same limiter. Another limiter on the cluster might run a lower priority job before our higher priority one.
  • Assuming constant priority levels, Bottleneck guarantees that jobs will be run in the order they were received on the same limiter. Another limiter on the cluster might run a job received later before ours runs.
  • highWater and load shedding (strategies) are per limiter. However, one limiter entering Blocked mode will put the entire cluster in Blocked mode until penalty milliseconds have passed. See Strategies.
  • The "empty" event is triggered when the (local) queue is empty.
  • The "idle" event is triggered when the (local) queue is empty and no jobs are currently running anywhere in the cluster.

You must work around these limitations in your application code if they are an issue to you. The publish() method could be useful here.

The current design guarantees reliability, is highly performant and lets limiters come and go. Your application can scale up or down, and clients can be disconnected at any time without issues.

It is strongly recommended that you give an id to every limiter and Group since it is used to build the name of your limiter's Redis keys! Limiters with the same id inside the same Redis db will be sharing the same datastore.

It is strongly recommended that you set an expiration (See Job Options) on every job, since that lets the cluster recover from crashed or disconnected clients. Otherwise, a client crashing while executing a job would not be able to tell the cluster to decrease its number of "running" jobs. By using expirations, those lost jobs are automatically cleared after the specified time has passed. Using expirations is essential to keeping a cluster reliable in the face of unpredictable application bugs, network hiccups, and so on.

Network latency between Node.js and Redis is not taken into account when calculating timings (such as minTime). To minimize the impact of latency, Bottleneck only performs a single Redis call per lifecycle transition. Keeping the Redis server close to your limiters will help you get a more consistent experience. Keeping the system time consistent across all clients will also help.

It is strongly recommended to set up an "error" listener on all your limiters and on your Groups.

Clustering Methods

The ready(), publish() and clients() methods also exist when using the local datastore, for code compatibility reasons: code written for redis/ioredis won't break with local.

ready()

This method returns a promise that resolves once the limiter is connected to Redis.

As of v2.9.0, it's no longer necessary to wait for .ready() to resolve before issuing commands to a limiter. The commands will be queued until the limiter successfully connects. Make sure to listen to the "error" event to handle connection errors.

const limiter = new Bottleneck({/* options */});

limiter.on("error", (err) => {
  // handle network errors
});

limiter.ready()
.then(() => {
  // The limiter is ready
});

publish(message)

This method broadcasts the message string to every limiter in the Cluster. It returns a promise.

const limiter = new Bottleneck({/* options */});

limiter.on("message", (msg) => {
  console.log(msg); // prints "this is a string"
});

limiter.publish("this is a string");

To send objects, stringify them first:

limiter.on("message", (msg) => {
  console.log(JSON.parse(msg).hello) // prints "world"
});

limiter.publish(JSON.stringify({ hello: "world" }));
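A small helper pair (hypothetical names, not part of Bottleneck's API) keeps the stringify/parse symmetry in one place instead of repeating it at every call site:

```javascript
// Hypothetical helpers: Bottleneck only transports strings over pubsub,
// so wrap the JSON round-trip once.
function encodeMessage(type, payload) {
  return JSON.stringify({ type, payload });
}

function decodeMessage(raw) {
  return JSON.parse(raw);
}

// usage sketch:
// limiter.publish(encodeMessage("greeting", { hello: "world" }));
// limiter.on("message", (raw) => console.log(decodeMessage(raw).payload.hello));
```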

clients()

If you need direct access to the redis clients, use .clients():

console.log(limiter.clients());
// { client: <Redis Client>, subscriber: <Redis Client> }

Additional Clustering information

  • Bottleneck is compatible with Redis Clusters, but you must use the ioredis datastore and the clusterNodes option.
  • Bottleneck is compatible with Redis Sentinel, but you must use the ioredis datastore.
  • Bottleneck's data is stored in Redis keys starting with b_. It also uses pubsub channels starting with b_. It will not interfere with any other data stored on the server.
  • Bottleneck loads a few Lua scripts on the Redis server using the SCRIPT LOAD command. These scripts only take up a few kilobytes of memory. Running the SCRIPT FLUSH command will cause any connected limiters to experience critical errors until a new limiter connects to Redis and loads the scripts again.
  • The Lua scripts are highly optimized and designed to use as few resources as possible.

Managing Redis Connections

Bottleneck needs to create 2 Redis Clients to function, one for normal operations and one for pubsub subscriptions. These 2 clients are kept in a Bottleneck.RedisConnection (NodeRedis) or a Bottleneck.IORedisConnection (ioredis) object, referred to as the Connection object.

By default, every Group and every standalone limiter (a limiter not created by a Group) will create their own Connection object, but it is possible to manually control this behavior. In this example, every Group and limiter is sharing the same Connection object and therefore the same 2 clients:

const connection = new Bottleneck.RedisConnection({
  clientOptions: {/* NodeRedis/ioredis options */}
  // ioredis also accepts `clusterNodes` here
});


const limiter = new Bottleneck({ connection: connection });
const group = new Bottleneck.Group({ connection: connection });

You can access and reuse the Connection object of any Group or limiter:

const group = new Bottleneck.Group({ connection: limiter.connection });

When a Connection object is created manually, the connectivity "error" events are emitted on the Connection itself.

connection.on("error", (err) => { /* handle connectivity errors here */ });

If you already have a NodeRedis/ioredis client, you can ask Bottleneck to reuse it, although currently the Connection object will still create a second client for pubsub operations:

import Redis from "redis";
const client = Redis.createClient({/* options */});

const connection = new Bottleneck.RedisConnection({
  // `clientOptions` and `clusterNodes` will be ignored since we're passing a raw client
  client: client
});

const limiter = new Bottleneck({ connection: connection });
const group = new Bottleneck.Group({ connection: connection });

Depending on your application, using more clients can improve performance.

Use the disconnect(flush) method to close the Redis clients.

limiter.disconnect();
group.disconnect();

If you created the Connection object manually, you need to call connection.disconnect() instead, for safety reasons.

Debugging your application

Debugging complex scheduling logic can be difficult, especially when priorities, weights, and network latency all interact with one another.

If your application is not behaving as expected, start by making sure you're catching "error" events emitted by your limiters and your Groups. Those errors are most likely uncaught exceptions from your application code.

Make sure you've read the 'Gotchas' section.

To see exactly what a limiter is doing in real time, listen to the "debug" event. It contains detailed information about how the limiter is executing your code. Adding job IDs to all your jobs makes the debug output more readable.
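For example, a tiny id helper (hypothetical, not part of Bottleneck) makes the debug output traceable back to individual requests:

```javascript
// Hypothetical helper: generate readable, unique job ids for the `id`
// job option, so "debug" events can be correlated with application requests.
let seq = 0;
function jobId(prefix) {
  seq += 1;
  return `${prefix}-${seq}`;
}

// usage sketch:
// limiter.on("debug", (message, data) => console.log(message, data));
// limiter.schedule({ id: jobId("fetch-user") }, () => fetchUser(42));
```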

When Bottleneck has to fail one of your jobs, it does so by using BottleneckError objects. This lets you tell those errors apart from your own code's errors:

limiter.schedule(fn)
.then((result) => { /* ... */ } )
.catch((error) => {
  if (error instanceof Bottleneck.BottleneckError) {
    /* ... */
  }
});

Upgrading to v2

The internal algorithms essentially haven't changed from v1, but many small changes to the interface were made to introduce new features.

All the breaking changes:

  • Bottleneck v2 requires Node 6+ or a modern browser. Use require("bottleneck/es5") if you need ES5 support in v2. Bottleneck v1 will continue to use ES5 only.
  • The Bottleneck constructor now takes an options object. See Constructor.
  • The Cluster feature is now called Group. This is to distinguish it from the new v2 Clustering feature.
  • The Group constructor takes an options object to match the limiter constructor.
  • Jobs take an optional options object. See Job options.
  • Removed submitPriority(), use submit() with an options object instead.
  • Removed schedulePriority(), use schedule() with an options object instead.
  • The rejectOnDrop option is now true by default. It can be set to false if you wish to retain the v1 behavior, although that usage is left undocumented since it is considered poor practice.
  • Use null instead of 0 to indicate an unlimited maxConcurrent value.
  • Use null instead of -1 to indicate an unlimited highWater value.
  • Renamed changeSettings() to updateSettings(), it now returns a promise to indicate completion. It takes the same options object as the constructor.
  • Renamed nbQueued() to queued().
  • Renamed nbRunning() to running(), it now returns its result using a promise.
  • Removed isBlocked().
  • Changing the Promise library is now done through the options object like any other limiter setting.
  • Removed changePenalty(), it is now done through the options object like any other limiter setting.
  • Removed changeReservoir(), it is now done through the options object like any other limiter setting.
  • Removed stopAll(). Use the new stop() method.
  • check() now accepts an optional weight argument, and returns its result using a promise.
  • Removed the Group changeTimeout() method. Instead, pass a timeout option when creating a Group.
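
While migrating call sites, a small adapter (hypothetical, not shipped with Bottleneck) can translate v1 positional arguments, including the 0/-1 "unlimited" sentinels, into a v2 options object:

```javascript
// Hypothetical migration helper: v1 used positional arguments with the
// sentinels 0 (unlimited maxConcurrent) and -1 (unlimited highWater);
// v2 takes an options object and uses null for "unlimited".
function v1ToV2Options(maxConcurrent, minTime, highWater, strategy) {
  return {
    maxConcurrent: maxConcurrent === 0 ? null : maxConcurrent,
    minTime: minTime,
    highWater: highWater === -1 ? null : highWater,
    strategy: strategy
  };
}

// usage sketch:
// v1: new Bottleneck(0, 333, -1)
// v2: new Bottleneck(v1ToV2Options(0, 333, -1))
```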

Version 2 is more user-friendly and powerful.

After upgrading your code, please take a minute to read the Debugging your application chapter.

Contributing

This README is always in need of improvements. If wording can be clearer and simpler, please consider forking this repo and submitting a Pull Request, or simply opening an issue.

Suggestions and bug reports are also welcome.

To work on the Bottleneck code, simply clone the repo, make your changes to the files located in src/ only, then run ./scripts/build.sh && npm test to ensure that everything is set up correctly.

To speed up compilation time during development, run ./scripts/build.sh dev instead. Make sure to build and test without dev before submitting a PR.

The tests must also pass in Clustering mode and using the ES5 bundle. You'll need a Redis server running locally (latency needs to be minimal to run the tests). If the server isn't using the default hostname and port, you can set those in the .env file. Then run ./scripts/build.sh && npm run test-all.

All contributions are appreciated and will be considered.

bottleneck's People

Contributors

alexperovich, cliffkoh, copperwall, dobesv, elliot-nelson, gitter-badger, maikelmclauflin, martin-helmich, sgrondin, tjenkinson


bottleneck's Issues

Bottleneck v2.x seems to require redis, regardless of new Clustering setting?

Maybe I'm doing something wrong, but I've just updated to v2 in my project and although I have no intention of using clustering, my build is failing with complaints of not being able to resolve redis from node_modules?

Error: ./node_modules/bottleneck/lib/RedisStorage.js
Module not found: Error: Can't resolve 'redis' in 'C:\Users\jamie\Projects\massblock-app\node_modules\bottleneck\lib'
resolve 'redis' in 'C:\Users\jamie\Projects\massblock-app\node_modules\bottleneck\lib'
  Parsed request is a module
  using description file: C:\Users\jamie\Projects\massblock-app\node_modules\bottleneck\package.json (relative path: ./lib)
    Field 'browser' doesn't contain a valid alias configuration
  after using description file: C:\Users\jamie\Projects\massblock-app\node_modules\bottleneck\package.json (relative path: ./lib)
    resolve as module
      looking for modules in C:\Users\jamie\Projects\massblock-app\node_modules
        using description file: C:\Users\jamie\Projects\massblock-app\package.json (relative path: ./node_modules)
          Field 'browser' doesn't contain a valid alias configuration
        after using description file: C:\Users\jamie\Projects\massblock-app\package.json (relative path: ./node_modules)
          using description file: C:\Users\jamie\Projects\massblock-app\package.json (relative path: ./node_modules/redis)
            no extension
              Field 'browser' doesn't contain a valid alias configuration
              C:\Users\jamie\Projects\massblock-app\node_modules\redis doesn't exist
            .ts
              Field 'browser' doesn't contain a valid alias configuration
              C:\Users\jamie\Projects\massblock-app\node_modules\redis.ts doesn't exist
            .js
              Field 'browser' doesn't contain a valid alias configuration
              C:\Users\jamie\Projects\massblock-app\node_modules\redis.js doesn't exist
            .json
              Field 'browser' doesn't contain a valid alias configuration
              C:\Users\jamie\Projects\massblock-app\node_modules\redis.json doesn't exist
            as directory
              C:\Users\jamie\Projects\massblock-app\node_modules\redis doesn't exist
[C:\Users\jamie\Projects\massblock-app\node_modules\redis]
[C:\Users\jamie\Projects\massblock-app\node_modules\redis.ts]
[C:\Users\jamie\Projects\massblock-app\node_modules\redis.js]
[C:\Users\jamie\Projects\massblock-app\node_modules\redis.json]
[C:\Users\jamie\Projects\massblock-app\node_modules\redis]
 @ ./node_modules/bottleneck/lib/RedisStorage.js 77:14-30
 @ ./node_modules/bottleneck/lib/Bottleneck.js
 @ ./node_modules/bottleneck/lib/index.js

Any thoughts?

Allow to handle reservoir exhaustion

The reservoir option would be a great way to handle situation like GitHub rate limiting.
The API maintains X-RateLimit-Remaining, which corresponds to the concept of the reservoir, X-RateLimit-Reset, which defines when the reservoir will be refilled, and X-RateLimit-Limit, which determines how much to refill the reservoir.

The implementation could be done like that:

  • Create a Bottleneck limiter, setting the reservoir option with the value of X-RateLimit-Remaining
  • When the reservoir is exhausted, determine the time to wait until the rate limit reset, wait for that time and refill the reservoir
  • The Bottleneck would continue to process the jobs
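
The refill timing in the steps above can be sketched as follows (hypothetical helper; X-RateLimit-Reset is a Unix timestamp in seconds):

```javascript
// Hypothetical sketch: compute how long to wait until the API's
// rate-limit window resets, clamped at zero if it already has.
function msUntilReset(resetEpochSeconds, nowMs) {
  return Math.max(0, resetEpochSeconds * 1000 - nowMs);
}

// usage sketch (assumed wiring, not a committed Bottleneck feature):
// setTimeout(() => refillReservoir(rateLimitLimit),
//            msUntilReset(rateLimitReset, Date.now()));
```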

Currently, it seems that when the reservoir is exhausted, Bottleneck just stops executing the jobs in the queue without throwing an error or emitting an event.
That makes it really difficult to determine when the reservoir is empty, pause execution until it's refilled, and schedule its refill.

Maybe an event could be emitted when the reservoir gets exhausted, along with a way to schedule its refill. When that event is triggered, one could retrieve the amount and time of the next refill and schedule it.

Another option would be to allow defining the frequency and amount of refills in the Bottleneck constructor. Bottleneck would pause execution when the reservoir gets exhausted, wait until the scheduled refill, refill by the given amount, and continue processing tasks.

The first option is probably more complex to handle, but since it's more reactive it would better handle situations where other consumers use up some of the API's available requests. The second option is deterministic, so if something else uses some of the available API requests, there would be a gap between the reservoir size in Bottleneck and the X-RateLimit-Remaining on GitHub.

Issue with _sanitizePriority(priority);

Hello!

I was having a hard time getting priorities to work and discovered that priority = this._sanitizePriority(priority); seems to be changing the value of a validly passed priority inside of submitPriority() (which in my application is being called from schedulePriority).

Adding some logging so the function reads like this:

  console.log("TEMP_AJR args[0] in submit = " + arguments[0]);

  priority = arguments[0], task = arguments[1], args = 4 <= arguments.length ? slice.call(arguments, 2, j = arguments.length - 1) : (j = 2, []), cb = arguments[j++];

  console.log("TEMP_AJR pre-sanitize in submit = " + priority);

  priority = this._sanitizePriority(priority);

  console.log("TEMP_AJR sp in submit = " + priority);

logged this:
TEMP_AJR args[0] in submit = 6
TEMP_AJR pre-sanitize in submit = 6
TEMP_AJR sp in submit = 5

I'm a very inexperienced javascript programmer so I expect there's something wrong with how I'm using the library, but maybe there's some actual issue so I figured I'd pass on the note.

Hope this helps!

-Al

Running a function on an event only once

Hi
I use the lib to do some throttling, and use the 'empty' event to know when I can apply a cooldown and retry the procedure.

I tried once() instead of on(), but it is not supported. I also tried to use the return value from limiter.on(), but that's the limiter instance.

Is there a way to fire the handler only once?

Limiting Functions with Events

How to use bottleneck with events.

If this is my code, which does not have a callback... how do I rate limit it with Bottleneck?

    const service = google.drive('v3');
    const downPath = '/path/to/file';
    const dest = fs.createWriteStream(downPath);
    service.files.get({
        auth,
        fileId: fileid,
        alt: 'media',
    }).on('end', () => {
        return callback(true);
    }).on('error', (err) => {
        console.log('error');
    }).pipe(dest);

How to save / load the limiter objects' status?

Hi All,
Imagine I wanted to write a command line script in Node that needs to be rate-limited across multiple executions, e.g. a Twitter API client. I call it once, the limiter takes note of it, the script terminates. Then I call it again, the limiter is aware of the previous call, checks the limiting etc.

How do I manage this need with this library? E.g. can I "serialize" in some way the status of the library's objects, so that I can save them and re-load them back every time I run the script? I wouldn't want to use any backend component, as in the choice of Redis for classdojo/rolling-rate-limiter, but would be happy with something simpler and slower, such as file system or environment variables.

Thanks.

2.0 idea

It might be a good idea to release v2.0 in plain javascript. I think it would help make the code base easier for the majority to contribute to. I know I've personally been put off because I don't know CoffeeScript's syntax at all.

If you are against it, or just plain prefer CoffeeScript, I completely understand! Just a thought.

decaffeinate/decaffeinate can take care of the heavy lifting.

Thanks for this great library!

Exception in bottleneck.js

After rewriting my code I am facing an exception thrown in Bottleneck.js:240:
https://i.imgur.com/1zn0sVh.png

Exception has occurred: TypeError
TypeError: Cannot read property 'apply' of undefined
at Object.wrapped (C:\Users\h9pe\Documents\brawlstats.io-express\node_modules\bottleneck\lib\Bottleneck.js:240:21)
at Timeout._onTimeout (C:\Users\h9pe\Documents\brawlstats.io-express\node_modules\bottleneck\lib\Bottleneck.js:180:34)
at ontimeout (timers.js:469:11)
at tryOnTimeout (timers.js:304:5)
at Timer.listOnTimeout (timers.js:264:5)

The biggest issue here is that I have no idea which call/line is causing that exception to be thrown. In the past I noticed that it would be thrown when I called schedulePriority with something other than a function as the first parameter. However, I don't use schedulePriority at all anymore, so I'm stuck trying to figure out my issue. Any ideas?

The bottleneck type definitions?

https://www.npmjs.com/package/@types/bottleneck

https://www.typescriptlang.org/docs/handbook/declaration-files/introduction.html

First attempt:

// Type definitions for twit 1.15.0S
// Project: https://github.com/SGrondin/bottleneck
// Definitions by: Romel Gomez <https://github.com/romelgomez>
// Definitions: https://github.com/DefinitelyTyped/DefinitelyTyped


declare module 'bottleneck' {

  namespace Bottleneck {
    export interface Fn {
      (...args: any[]): any
    }
  }

  class Bottleneck {

    /**
     * @see https://github.com/SGrondin/bottleneck#constructor
     */
    constructor(maxNb?: number, minTime?: number, highWater?: number, strategy?: number, rejectOnDrop?: boolean);

    /**
     * @see https://github.com/SGrondin/bottleneck#submit
     */
    submit(fn: Bottleneck.Fn, ...args: any[]);

    /**
     * @see https://github.com/SGrondin/bottleneck#schedule
     */
    schedule(fn: Bottleneck.Fn, ...args: any[]);

    /**
     * @see https://github.com/SGrondin/bottleneck#submitpriority
     */
    submitPriority(priority:number, fn: Bottleneck.Fn, ...args: any[]);

    /**
     * @see https://github.com/SGrondin/bottleneck#schedulepriority
     */
    schedulePriority(priority:number, fn: Bottleneck.Fn, ...args: any[]);

  }

  export = Bottleneck;

}

Support limiter.wrap

This issue is a suggestion.

Assuming a function that should be throttled like

var get = function(id) {
   return new Promise(...)
}

It would be nice to be able to do this:

var get = limiter.wrap(function (id) {
  return new Promise(...)
}

Thus, limiter.wrap should return a function. It should pass arguments as received and in the same order to the wrapped function.

Several concurrent but rate limited?

Hi,

Tried implementing this library for the use case of a remote API that has rate limits.

var limiter = new Bottleneck(3, 1000); // 3 per second

This code actually waits a second before firing off request nr. 2.

I'd like it to be able to do 3 requests per second and only wait if the queue is full, that is when run in a loop it would fire the first three requests immediately, and request 4 would go off 1 sec after request 1, etc.

More complex example use case: (remote API has specs: max 3/sec + 200/hr)

var limiter = new Bottleneck(3, 1000); // 3 per second
var hourlyLimit = new Bottleneck(200, 1000*60*60); // 200 per hour
limiter.chain(hourlyLimit);

This currently fires one and then waits an hour for the next call.

Am I missing something here? Thankful for advice.

How to know the queue length

"Bottleneck builds a queue of requests and executes them as soon as possible."
How can I know the queue length? There is no method in the Bottleneck class to check it.
I think it's needed because if you issue several limiter.submit calls, the corresponding callback functions need a way to test whether this is the last of all the requests.

Return queue position

Hello,
is there a way to get the current queue position / approx remaining time for a request in the schedule?

A way to check what is queued or avoiding duplicates

Hi. I use bottleneck to handle API requests that have a rather low limit, therefore my queue can go upwards to 1000-2000 items. The problem I have is that some new tasks will in some circumstances already be in the queue but not completed. Is there a way to check what is in the queue and the arguments, in order to avoid duplicates?

.stopAll() throws an error

Does stopAll work with .schedule()?

I have a bunch of promises lined up and calling stopAll() throws an uncaught exception

NOSCRIPT ReplyError

I keep getting this error with no apparent pattern.

ReplyError: NOSCRIPT No matching script. Please use EVAL.
    at parseError (/Users/gabrielbira/workspace/littledata/queue/node_modules/redis-parser/lib/parser.js:193:12)
    at parseType (/Users/gabrielbira/workspace/littledata/queue/node_modules/redis-parser/lib/parser.js:303:14)
  command: 'EVALSHA',
  args: 
   [ undefined,
     3,
     'b_settings',
     'b_running',
     'b_executing',
     '0',
     '1',
     '1518520777288' ],
  code: 'NOSCRIPT'
}

How to use .on() and .dropped() when using a group.

I had global event listeners before adopting a group limiter.

limiter.on('dropped', (dropped) => {
    console.warn("Dropped IC request", dropped)
})

debug && limiter.on('debug', function (message, data) {
    console.log(message, data)
})

Now with a group limiter, how do I use these event listeners?

If I create a listener per limiter, how do I clean them up when the limiter is garbage collected from the group? (Or are limiters not GC'd?)

Version 2 wishlist

Want some feature in v2? Comment here.

  • Make "reject on drop" true by default
  • Add support for an options object, but keep the simple argument passing style for the 4 basic arguments (minTime, maxConcurrent, highWater, strategy)
  • Rename all the changeX methods to setX
  • Make the Promise library an option instead of a Prototype setting. Consider maybe making it an option when requiring the library, i.e. var Bottleneck = require('bottleneck')(myPromiseLibrary);.
  • Modern JavaScript (ES2015) rewrite, compiled to ES5 using Babel.
  • Support a timeout (with an error if the time is exceeded) for any job, both callbacks and promises. I don't want this in version 1 because it breaks the backwards compatibility.

More to come..

problem using bottleneck

I've been using bottleneck in my project for half a month and ran into problems I can't explain. First, I'll post an image.

plot

In the plot, X is 50 time points, Y is time spent in seconds, rectangles are the total time spent for each request, and diamonds are the bottleneck wait time.

The rate limit I set is 3s. Most points in the set look fine, but 9~10 of them seem unreasonably high in the image. Has anyone had the same problem?

Question: Is it possible to get a batch of rate limited promises?

I'm wondering if bottleneck is suitable for the following use case.

I have an high throughput API endpoint that receives requests from various sources. I'm using bottleneck.group to rate limit requests per source. Bottleneck does a great job of this.

In doing so, I realised there is an optimisation opportunity. Instead of processing each group request individually, I'd like to process a batch of requests per group. So if the rate limit was 1 request per 10 seconds, in the new design it would be any requests that were queued within that 10-second unit.

Is that something I could achieve with bottleneck? Apologies if it's way out of scope for this library.

highWater value 0 is not effective anymore

In earlier releases, setting the highWater param to 0 ensured that the queue was not used. Together with the OVERFLOW strategy, this helped make sure that submitted requests were all discarded if there was a running request. In the latest version this no longer works.

Issue when running on Node 6

Currently using [email protected] and [email protected] and getting the following issue...

~/node_modules/bottleneck/lib/Bottleneck.js:73
      async disconnect(flush = true) {
            ^^^^^^^^^^
SyntaxError: Unexpected identifier
    at Object.exports.runInThisContext (vm.js:76:16)
    at Module._compile (module.js:542:28)
    at Object.Module._extensions..js (module.js:579:10)
    at Module.load (module.js:487:32)
    at tryModuleLoad (module.js:446:12)
    at Function.Module._load (module.js:438:3)
    at Module.require (module.js:497:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (~/node_modules/bottleneck/lib/index.js:3:20)
    at Object.<anonymous> (~/node_modules/bottleneck/lib/index.js:5:4)

Should I expect this module to export logic that can run in my environment, or should I tell babel to have at it? Here's my .babelrc for the curious:

{
    "presets": [[
        "env", {
          "targets": {
            "node": "6.10"
          }
        }
    ]],
    "plugins": [
      ["transform-object-rest-spread", {
        "useBuiltIns": true
      }]
    ]
}

Stuck after first batch

Hi,
I've got an app that uses Bottleneck. It gets stuck after running the first batch of requests. Could someone please help? A full description of the problem and the app code is posted to stackoverflow

Thanks!

Cannot read property 'apply' of undefined

Hey, any idea why this isn't working correctly?

import Promise from 'bluebird'
import Bottleneck from 'bottleneck'

const limiter = new Bottleneck(1, 100)
const example = (v) => Promise.delay(100).then(() => v)
const exampleLimited = (...args) => limiter.schedule.apply(null, [example, ...args])

Feature: reject promises if they are removed from the queue

Currently if promises are removed because the queue becomes too full they aren't rejected.

var limiter = new Bottleneck(1, 100, 3);

limiter.schedule(test).then(console.log).catch(onRejected);
limiter.schedule(test).then(console.log).catch(onRejected);
limiter.schedule(test).then(console.log).catch(onRejected);
limiter.schedule(test).then(console.log).catch(onRejected);
limiter.schedule(test).then(console.log).catch(onRejected);
limiter.schedule(test).then(console.log).catch(onRejected);

function onRejected() {
    console.log("rejected");
}

function test() {
    console.log("starting");
    return new Promise((resolve) => {
        setTimeout(() => {
            resolve("resolved!");
        }, 1000);
    });
}

Here "rejected" is never printed to the console but 4 of the promises are resolved. Might have a look at submitting a PR with this functionality tomorrow.

More fleshed out errors

When using promises, generic Errors are not super helpful.

Based on the current implementation of schedule for instance, it's not easy to tell if you're dealing with a rejection due to bottleneck's queue filling up or if your promise itself failed.

There are a few ways this could be addressed, I think. One is to set a code property a la node.js errors, which should be sufficient. Another would be to use custom error types.

Thoughts?

Task weight

Working with Facebook's Graph API, I stumbled upon a neat 'batch request' feature which basically does what it says and allows you to bundle multiple requests into one. Unfortunately, I'm not sure how to avoid going over API limits using Bottleneck, since a batch request is counted as all the requests bundled in it. Is there a way to do this? If not, can we have a new feature where we can attribute weights to tasks? (e.g. tell this library this task is worth 5 tasks)

How to pause the queue?

I note that there is a documented function to stop (and remove all entries in) the queue, but is there a way of temporarily pausing (and unpausing) execution of a currently executing queue without destroying the remaining items in the queue? To be clear, say I populate a queue with tasks that are presently executing. I would like to temporarily suspend execution of the tasks. Perhaps I'll add new tasks whilst the execution of the queue is suspended. Then I would like to be able to have the queue begin executing tasks again, with respect to any changes made to the queue during its suspended phase.

I imagine it's possible to clone the queue prior to calling the function to stop the queue, then repopulating it, but there's probably a less destructive way that I'm missing :) If not - then some tips on how to accomplish this would be wonderfully helpful. Thanks in advance.

Error "task.apply(...).then is not a function" on node 9.7.1

When I run this on node 9.7.1

const Bottleneck = require("bottleneck");

const limiter = new Bottleneck({
  maxConcurrent: 1,
  minTime: 1000
});

limiter.schedule(() => 'im the call')
  .then((result) => { console.log('handle result', result) });

I get

/[...]/node_modules/bottleneck/lib/Bottleneck.js:376
          return task.apply({}, args).then(function (...args) {
                                      ^

TypeError: task.apply(...).then is not a function
    at Object.wrapped ([...]/node_modules/bottleneck/lib/Bottleneck.js:376:39)
    at Timeout._executing.(anonymous function).timeout.setTimeout [as _onTimeout] ([..]/node_modules/bottleneck/lib/Bottleneck.js:233:32)
    at ontimeout (timers.js:466:11)
    at tryOnTimeout (timers.js:304:5)
    at Timer.listOnTimeout (timers.js:267:5)

Is there a known issue with node 9.7.1?

Typescript error

Use
export = Bottleneck;
instead of
export default Bottleneck;

I got a error in Typescript 2.5.2
Error: Cannot use 'new' with an expression whose type lacks a call or construct signature.

Typescript unable to compile using import syntax

Probably a question for @alexperovich, when importing bottleneck in a Typescript environment, I get compilation errors.

I'm importing like so:

import Bottleneck from 'bottleneck'

then using it:

const limiter = new Bottleneck(10)

When doing a compilation I get an error like:

TSError: ⨯ Unable to compile TypeScript
... File '.../node_modules/bottleneck/bottleneck.d.ts' is not a module.

My tsconfig.json looks like:

{
  "compilerOptions": {
    "lib": ["dom", "dom.iterable", "scripthost", "es2017"],
    "outDir": "./build/",
    "noImplicitAny": true,
    "noUnusedParameters": true,
    "noUnusedLocals": true,
    "module": "commonjs",
    "target": "es6",
    "sourceMap": true,
    "inlineSources": true
  },
  "include": [
    "./**/*.ts"
  ],
  "exclude": [
    "node_modules",
    "ts-node"
  ]
}

I suspect that I'm doing something wrong here but this is a mature code base and haven't run into this with other packages. Any ideas?

How to cluster

Hello,
I read that this has been requested in the past and after a very long research I wasn't able to find a rate limiting library backed by redis which would also contain all the features I need, so somehow I am still stuck with bottleneck.

Literally every express project I wrote in the past required clustering. It's a pity that bottleneck doesn't really support it. Every time I scale my application (up or down) I am forced to adapt the amount of available requests for my limiters on each instance. Since the available requests usually don't change with my number of instances (because I am using external API services), this is very annoying.

Do you have any recommendations what I could do to solve this? Do you maybe intend to implement cluster support in a major update?

Timeout, documentation issue

Hi guys, nice job, thx.

I'm finding the documentation confusing: the gotchas section tells you to read the job options section for details about timeouts, but there is nothing there about them.

Cannot specify custom Promise Library

I'm running into an error trying to substitute Q as my promise library.

In the README it states that

It's also possible to replace the Promise library used:

var Bottleneck = require("bottleneck");
Bottleneck.Promise = myPromiseLibrary;

var limiter = new Bottleneck(maxConcurrent, minTime, highWater, strategy);

However the following code does not create a Q Promise

var Bottleneck = require('bottleneck');
var Q = require('q');
Bottleneck.Promise = Q.Promise;

var limiter = new Bottleneck(1, 1000);

var p = limiter.schedule(function() {
  return Q.resolve(true);
});

console.log(Q.isPromise(p))  // false

Either this is a bug or I'm not setting this up correctly. Please help?

Scheduling promises

Hello,
is it also possible to schedule Promises directly instead of functions that return a promise?

I imagined something like this

requestData() {
  const p = new Promise([..])
  return this.limiter.schedule(p)
}

This way I could avoid writing wrapper functions which add a schedule for all my requestXY() functions.
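One reason a limiter takes a function rather than a Promise: a Promise's executor runs the moment the Promise is constructed, so handing the limiter an existing Promise would leave it nothing to delay. A minimal sketch (no Bottleneck required):

```javascript
// A Promise starts its work immediately on construction; a function lets
// the caller (e.g. a limiter) decide when the work begins.
const log = [];

const eagerPromise = new Promise((resolve) => {
  log.push("promise work started"); // runs right now, before any scheduling
  resolve(1);
});

const lazyTask = () => new Promise((resolve) => {
  log.push("task work started");    // runs only when the function is called
  resolve(2);
});

log.push("scheduling");
lazyTask(); // a limiter would invoke this at the permitted time

// log: ["promise work started", "scheduling", "task work started"]
```

To avoid writing wrappers for every `requestXY()` function by hand, `limiter.wrap(requestXY)` (shown in the Quick Start above) produces a rate-limited version of the function directly.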

Retries after a certain time.

Hi, is it possible to implement a backoff policy with retry capability? I'm using the 'function wrap' option and was looking for a way to retry a request when a throttling exception occurs, but wait a few seconds before making the follow-up request. Thanks
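One way to get this today is to wrap the function in a retry helper before handing it to the limiter. The sketch below is a generic exponential-backoff wrapper; `withRetry`, `retries`, and `baseDelayMs` are illustrative names, not part of Bottleneck's API:

```javascript
// Hypothetical retry helper: wraps a promise-returning function with
// exponential backoff (waits baseDelayMs, 2*baseDelayMs, 4*baseDelayMs, ...).
function withRetry(fn, retries = 3, baseDelayMs = 1000) {
  return async function (...args) {
    for (let attempt = 0; ; attempt++) {
      try {
        return await fn(...args);
      } catch (err) {
        if (attempt >= retries) throw err; // give up after the last retry
        await new Promise((resolve) =>
          setTimeout(resolve, baseDelayMs * 2 ** attempt)
        );
      }
    }
  };
}

// Usage sketch with a limiter:
// const wrapped = limiter.wrap(withRetry(myFunction));
```

Note that this retries outside the limiter's scheduling, so each retry of a wrapped call counts as part of the same scheduled job; whether that suits your rate limit depends on your use case.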

Heads up: .on() no longer returns instance, breaking api

Just a quick heads up that somewhere between 2.1.0 and 2.2.1 the API was changed so that calls to bottleneck.on() no longer return the Bottleneck instance and thus are no longer chainable.

Maybe not a biggie, but probably still worth a mention in the release notes, since (for us) this was a breaking change and makes Bottleneck's behavior differ from Node's events.EventEmitter.on().

Jobs processing order in cluster mode

Hi folks!

I have a problem with the clustering mode. As you can see in the screenshot, jobs have been running 1st, 3rd, 2nd, 4th, not in sequential order.
There are 2 Node processes. Job working time is 1 second.

(screenshot of out-of-order job execution omitted)

Cluster documentation incorrect

The code example for creating a cluster is incorrect. It is missing the "new" keyword on this line:

var cluster = Bottleneck.Cluster(maxConcurrent, minTime, highWater, strategy);

Incompatible with Lolex

First of all, I'm not sure if this is an issue with Bottleneck or Lolex (used by Sinon to stub setTimeout and friends, usually to speed up tests).

If I install Lolex and then use Bottleneck to schedule two asynchronous operations, only the first of them will execute; the other one will stay pending. Internally, Bottleneck uses setTimeout to schedule, so it looks to me like there's a callback that's not getting executed.

I've created a minimal working example. The same thing happens if I use submit with a callback instead of schedule with a Promise, as well as with Bluebird instead of the native Promise object. Also, the version of Node doesn't seem to matter. Finally, I'm on Windows, but I sure hope that's not the cause.

I've tried to dig into the Bottleneck code but wasn't able to come up with anything useful. I did notice that there's only one clearTimeout and it doesn't get called when I run my example.

Thanks!

Add redis support for nodejs cluster support

It would be great if Redis could be supported for handling multiple Node.js processes (cluster support). For example, if I'm running 4 Node.js instances with a load balancer in front, the Bottleneck instance may not work as expected. Say a large number of hits are coming into the site, and you need the limiter to execute only 1 concurrent request at a time, every 200ms. With a Node.js cluster there will be 4 Bottleneck limiter instances, each acting independently, so if the load balancer spreads the load you might get 4 concurrent requests running instead of the expected 1.

With Redis support, the instances could all communicate with each other and ensure that only 1 concurrent request runs at a time. I'm using the Bottleneck cluster feature for all my work, so it would need to work with that as well. I hope this is explained well enough; let me know if you need any more details.

Are keys one-time use?

Mon Mar 19 2018 11:50:35 GMT+0200 (EET) - error: (node:45466) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 56): Error: Bottleneck limiter (id: 'group-key-nuifj9q6emDxWdXAp') could not find the Redis key it needs to complete this action (key 'b_group-key-nuifj9q6emDxWdXAp_settings'), was it deleted? Note: This limiter is in a Group, it could have been garbage collected.

I get what is happening. But does this mean a group key is one-time use? Why doesn't Bottleneck just reinitialise keys after garbage collection?

Provide a no-callback alternative to `submit`?

Thanks for this. It seems to be perfect (for my simple use case) apart from this wart:

If a callback isn't necessary, you must pass null or an empty function instead. It will not work if you forget to do this.

IMO, this is ugly. It pretty much requires that trailing parameter to be commented/explained away, and I'm not sure how to explain it, since the documentation itself doesn't explain it :-) How about adding a method that doesn't require a callback, e.g. schedule, i.e. allow:

limiter.submit(fn, null);

to be written as:

limiter.schedule(fn);
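In user land, such an adapter is a one-liner: default the trailing callback to a no-op. The sketch below uses a hypothetical `submitOptionalCallback` helper and a stand-in limiter object for illustration; neither is part of Bottleneck's API:

```javascript
// Hypothetical adapter: supply the required trailing callback so callers
// can omit it.
function submitOptionalCallback(limiter, fn, callback = () => {}) {
  limiter.submit(fn, callback);
}

// Minimal stand-in for a limiter, just so this sketch is self-contained:
const fakeLimiter = {
  submit(task, cb) {
    task(cb); // a real limiter would queue and rate-limit here
  }
};

let ran = false;
submitOptionalCallback(fakeLimiter, (done) => {
  ran = true;
  done();
});
// ran === true, with no explicit null callback at the call site
```

Note that `limiter.schedule(fn)`, shown in the Quick Start above, is the promise-based method that fills this role: it takes a promise-returning function and needs no callback at all.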

stopAll() indefinite nbRunning

I am limiting promises using limiter.schedule(). When I run limiter.stopAll(), the queue is cleared, but whatever was running is never cleared and seems to just disappear. I am using limiter.nbRunning() to verify.

Am I doing something wrong or is this intended?
