tarn.js's Introduction

Why yet another resource pool?

Tarn is focused on robustness and the ability to recover from errors. Tarn has timeouts for all operations that can fail or time out, so you should never end up with a pool full of crap. Tarn has a comprehensive test suite, and we are committed to adding tests and fixing all bugs that are found.

Tarn will always remain simple.

Install

npm install tarn

Usage

const { Pool, TimeoutError } = require('tarn');

const pool = new Pool({
  // Function that creates a resource. You can either pass the resource
  // to the callback(error, resource) or return a promise that resolves the resource
  // (but not both). The callback syntax will be deprecated at some point.
  create: cb => {
    cb(null, new SomeResource());
  },

  // Validates a connection before it is used. Return true or false
  // from it. If false is returned, the resource is destroyed and
  // another one is acquired. Should return a Promise if validate is
  // an async function.
  validate: resource => {
    return true;
  },

  // Function that destroys a resource, should return a promise if
  // destroying is an asynchronous operation.
  destroy: someResource => {
    someResource.cleanup();
  },

  // logger function, noop by default
  log: (message, logLevel) => console.log(`${logLevel}: ${message}`),

  // minimum size
  min: 2,

  // maximum size
  max: 10,

  // acquire promises are rejected after this many milliseconds
  // if a resource cannot be acquired
  acquireTimeoutMillis: 30000,

  // create operations are cancelled after this many milliseconds
  // if a resource cannot be acquired
  createTimeoutMillis: 30000,

  // destroy operations are awaited for at most this many milliseconds
  // new resources will be created after this timeout
  destroyTimeoutMillis: 5000,

  // Free resources are destroyed after this many milliseconds.
  // Note that if min > 0, some resources may be kept alive for longer.
  // To reliably destroy all idle resources, set min to 0.
  idleTimeoutMillis: 30000,

  // how often to check for idle resources to destroy
  reapIntervalMillis: 1000,

  // how long to idle after failed create before trying again
  createRetryIntervalMillis: 200,

  // If true, when a create fails, the first pending acquire is
  // rejected with the error. If this is false (the default) then
  // create is retried until acquireTimeoutMillis milliseconds has
  // passed.
  propagateCreateError: false
});

// acquires a resource. The promise is rejected with `tarn.TimeoutError`
// after `acquireTimeoutMillis` if a resource could not be acquired.
const acquire = pool.acquire();

// acquire can be aborted using the abort method.
// If the acquire had triggered the creation of a new resource in the pool,
// the creation will continue; it is not aborted.
acquire.abort();

// the acquire object has a promise property that gets resolved with
// the acquired resource
try {
  const resource = await acquire.promise;
} catch (err) {
  // if the acquire times out an error of class TimeoutError is thrown
  if (err instanceof TimeoutError) {
    console.log('timeout');
  }
}

// releases the resource.
pool.release(resource);

// returns the number of non-free resources
pool.numUsed();

// returns the number of free resources
pool.numFree();

// how many acquires are waiting for a resource to be released
pool.numPendingAcquires();

// how many asynchronous create calls are running
pool.numPendingCreates();

// waits for all resources to be returned to the pool and destroys them.
// pool cannot be used after this.
await pool.destroy();

// The following examples add synchronous event handlers, for example to allow
// collecting pool behaviour diagnostics externally.
// If any of these hooks fail, all errors are caught and logged as warnings.

// resource is acquired from pool
pool.on('acquireRequest', eventId => {});
pool.on('acquireSuccess', (eventId, resource) => {});
pool.on('acquireFail', (eventId, err) => {});

// resource returned to pool
pool.on('release', resource => {});

// resource was created and added to the pool
pool.on('createRequest', eventId => {});
pool.on('createSuccess', (eventId, resource) => {});
pool.on('createFail', (eventId, err) => {});

// resource is destroyed and evicted from pool
// resource may or may not be invalid when destroySuccess / destroyFail is called
pool.on('destroyRequest', (eventId, resource) => {});
pool.on('destroySuccess', (eventId, resource) => {});
pool.on('destroyFail', (eventId, resource, err) => {});

// when internal reaping event clock is activated / deactivated
pool.on('startReaping', () => {});
pool.on('stopReaping', () => {});

// pool is destroyed (after poolDestroySuccess all event handlers are also cleared)
pool.on('poolDestroyRequest', eventId => {});
pool.on('poolDestroySuccess', eventId => {});

// remove single event listener
pool.removeListener(eventName, listener);

// remove all listeners from an event
pool.removeAllListeners(eventName);

Changelog

Master

3.0.1 2020-10-25

  • Added triggering of the missing createFail event on timeout error - fixes #57

3.0.0 2020-04-18

  • Async validation support: the resource validation function can now return a promise #45
  • Fixed releasing an abandoned resource after creation when create times out #48

Released as a major version, because async validation support required lots of internal changes, which may cause subtle differences in behavior.

2.0.0 2019-06-02

  • Accidentally published breaking changes in 1.2.0. Unpublished it and published again with the correct version number 2.0.0 #33

1.2.0 2019-06-02 (UNPUBLISHED)

  • Passing unknown options throws an error #19 #32
  • Diagnostic event handlers to allow monitoring pool behaviour #14 #23
  • Dropped node 6 support #25 #28
  • pool.destroy() now always waits for all pending destroys to finish before resolving #29

1.1.5 2019-04-06

  • Added changelog #22
  • Handle opt.destroy() being a promise with destroyTimeout #16
  • Explicitly silence bluebird warnings #17
  • Add strict typings via TypeScript #10

tarn.js's Issues

consider adding some way to instrument time-to-acquire?

I'd like to be able to see metrics on how long it takes to acquire an object from the pool, which would help us tune our pool configuration over time. If you're open to this I'd be happy to take a stab at a PR. I'm not sure of the best implementation strategy, but perhaps an event emitter. Something like the following would be great:

pool.on("acquire", (ms) => {
  metrics.gauge("db_connection_pool", ms);
})

// or, to avoid making pool itself an emitter:

pool.metrics().on("acquire", (ms) => { ... });
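
For reference, with the diagnostic events documented in the usage section above, time-to-acquire can presumably be measured externally already, since acquireRequest and acquireSuccess share an eventId (metrics.gauge is the hypothetical reporter from the snippet above):

const acquireStartTimes = new Map();

pool.on('acquireRequest', eventId => {
  acquireStartTimes.set(eventId, Date.now());
});

pool.on('acquireSuccess', (eventId, resource) => {
  const startedAt = acquireStartTimes.get(eventId);
  acquireStartTimes.delete(eventId);
  if (startedAt !== undefined) {
    metrics.gauge('db_connection_pool', Date.now() - startedAt);
  }
});

pool.on('acquireFail', (eventId, err) => {
  acquireStartTimes.delete(eventId);
});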

Dropping support for node v6 and using async/await

With v6 out of LTS and v12 introducing async stacktraces, would you at all be interested in refactoring the codebase by replacing all .then/.catch with async/await and try/catch for the performance & debugging improvements? If you are, I'll be happy to help out with a few PRs
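
For illustration, the kind of mechanical change involved would look roughly like this (a generic sketch, not actual tarn code; doWork and handleError are hypothetical helpers):

// before: promise chain
function useResourceWithThen(pool) {
  return pool.acquire().promise
    .then(resource =>
      doWork(resource).finally(() => pool.release(resource))
    )
    .catch(handleError);
}

// after: async/await with try/catch, which also keeps readable async stack traces
async function useResourceWithAwait(pool) {
  try {
    const resource = await pool.acquire().promise;
    try {
      await doWork(resource);
    } finally {
      pool.release(resource);
    }
  } catch (err) {
    handleError(err);
  }
}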

feature request: asynchronous validate function

I am using knex 0.15.2 along with tarn 1.1.4.
I have been getting the "This socket has been ended by the other party." error recently when trying to use a connection that has been closed or redirected by the server.
We have found that we may be able to avoid this error by pinging the connection in the validation function.
The problem we're having is that the validation function is synchronous and there is no way for it to wait for the ping promise to resolve.
Can we have the validate function support async functions (i.e. return a promise)?
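
For reference, with the async validation added in 3.0.0 (see the usage section and changelog above), a ping-based validate could presumably look roughly like this (createConnection, connection.end() and connection.ping() are hypothetical driver calls):

const { Pool } = require('tarn');

const pool = new Pool({
  create: () => createConnection(),         // hypothetical connection factory
  destroy: connection => connection.end(),  // hypothetical cleanup
  // resolving to false makes tarn destroy the connection and acquire
  // another one instead of handing out a dead socket
  validate: connection => {
    return connection.ping()                // hypothetical driver method
      .then(() => true)
      .catch(() => false);
  },
  min: 0,
  max: 10
});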

Feature Request: Ability to configure free resource selection algorithm

Background
I've spent the last couple of days debugging a production issue where we were seeing our application leak idle connections to mysql. Our application uses both typeorm and knex to fetch data from mysql, but we discovered that the leaks were only coming from knex. From there I spent a bunch of time reading over the tarn and mysql source code to try to understand what was going on. As it turns out, the native mysql driver does not clean up the underlying socket when connection.end() is invoked (which knex does when tarn successfully destroys the resource). This gave me an answer to why I was seeing a lot more open TCP connections than I would expect in my application container when running netstat | grep mysql. However, this didn't explain why typeorm did not suffer from this leakage problem. typeorm relies on the native mysql library connection pool for its pooling, and that library treats its free resource array as a queue while tarn treats it as a stack.

Request
The choice of data structure has a pretty significant impact on how the pool performs relative to its underlying resources. A queue is likely to result in significantly fewer create & destroy calls, as it load-balances more fairly across all the free resources, but will lead to the pool being pegged at the max setting. A stack will lead to fewer resources being used much more frequently and optimizes for creating the fewest resources needed. In my case, I would prefer to use the min/max settings to control the number of connections to mysql and optimize for connecting less often, as a) the driver doesn't clean up well and b) connecting to the db puts additional non-zero load on it.

Would you consider allowing the Pool class to accept a configuration option that allows a user to adjust the internal selection algorithm? For example:

const pool = new Pool ({
  min: 0,
  max: 10,
  freeResourceSelectionAlgorithm: 'lifo', // 'lifo' | 'fifo' 
  ...
})
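
For illustration, the difference the option would make inside the pool is roughly the following (a sketch; free and pickFreeResource are illustrative names, not tarn's actual internals):

// free is the array of idle resources waiting in the pool
function pickFreeResource(free, algorithm) {
  return algorithm === 'fifo'
    ? free.shift()  // queue: reuse the oldest idle resource, spreading load evenly
    : free.pop();   // stack (current behaviour): reuse the most recently released one
}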

I've made a stackblitz to help visualize what is going on in the pool. It may help you as well. In our particular case, every destroyed resource caused a leaked socket to mysql.

If this approach is agreeable, I would happily put up a PR! Thanks!

It seems tarn breaks create-react-app compilation

Hello. As far as I know, a project based on create-react-app needs its dependencies compiled to ES5, but it seems that the tarn package only provides ES6 code. Could you provide any help?

Here is the error I get:

Failed to minify the code from this file:

 	./node_modules/tarn/lib/utils.js:6

Read more here: http://bit.ly/2tRViJ9

npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] build: `react-scripts-ts build`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] build script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     C:\Users\yagol\AppData\Roaming\npm-cache\_logs\2018-05-27T18_35_40_031Z-debug.log

How does pool min resource amount behave?

When the pool is created, does tarn immediately warm up and try to create the min number of resources, or does min: 2 only affect how many resources are left in the pool when cleaning up?

More Logging when waiting for pool member

I would love to see an enhancement to print logs when a query begins waiting for a pool member to be available. Optionally we could print out what queries are currently running to add even more visibility.

Feature request: Ability to disable reaping

pool.check()-ing on an interval / timeout, in the context of AWS Lambda, doesn't work as the runtime gets frozen, leading to issues like knex/knex#3636.

A possible solution to the above may be having the ability to disable reaping (e.g. reapIntervalMillis: false, or disableReaping: true) and do manual pool.check()-ing.
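
Sketched out, the requested usage would presumably look something like this (reapIntervalMillis: false is the proposed, currently unsupported option; createDbConnection, conn.end() and runQuery are hypothetical helpers):

const { Pool } = require('tarn');

const pool = new Pool({
  create: () => createDbConnection(),  // hypothetical factory
  destroy: conn => conn.end(),         // hypothetical cleanup
  min: 0,
  max: 5,
  reapIntervalMillis: false            // hypothetical: disable the internal reaping timer
});

// hypothetical Lambda handler: reap manually while the runtime is active,
// instead of relying on a timer that is frozen between invocations
exports.handler = async () => {
  const conn = await pool.acquire().promise;
  try {
    return await runQuery(conn);       // hypothetical work
  } finally {
    pool.release(conn);
    pool.check();                      // manual reap, as described above
  }
};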

destroy() doesn't wait for all resources to actually be destroyed before returning

Also noticed this bug while writing hooks. I'll add a PR to fix this shortly. I suppose we should keep track of all pending destroys like we do for acquires and creates.

    it('should wait for all resource destroys to finish before returning', () => {
      let destroyDelay = 200;
      pool = new Pool({
        create: () => {
          return Promise.resolve({});
        },
        destroy(res) {
          destroyDelay -= 50;
          return Promise.delay(destroyDelay).then(() => {
            res.destroyed = true;
          });
        },
        reapIntervalMillis: 10,
        idleTimeoutMillis: 1,
        min: 0,
        max: 10
      });

      return Promise.all([
        pool.acquire().promise,
        pool.acquire().promise,
        pool.acquire().promise,
        pool.acquire().promise
      ])
        .then(resources => {
          pool.release(resources[0]);
          pool.release(resources[1]);
          pool.release(resources[2]);
          pool.release(resources[3]);

          // reaping should have started destroying these resources already
          return Promise.delay(30).then(() => resources);
        })
        .then(resources => {
          // pool destroy should wait until all destroys are completed
          return pool.destroy().then(() => resources);
        })
        .then(resources => {
          expect(resources[0].destroyed).to.be.ok();
          expect(resources[1].destroyed).to.be.ok();
          expect(resources[2].destroyed).to.be.ok();
          expect(resources[3].destroyed).to.be.ok();
        });
    });
  });

Update release

You have 3 commits to master since the latest tag. Please update the repo.

npm ERR! notarget No matching version found for [email protected]

npm ERR! code ETARGET
npm ERR! notarget No matching version found for [email protected]
npm ERR! notarget In most cases you or one of your dependencies are requesting
npm ERR! notarget a package version that doesn't exist.
npm ERR! notarget
npm ERR! notarget It was specified as a dependency of 'content-type-builder'
npm ERR! notarget

npm ERR! A complete log of this run can be found in:

Example use cases

Hi there,

As a total noob in the node pooling world, it would be great to see some simple example use cases,

either in a folder in this repo, or online on websites like codesandbox.io / stackblitz.com / codepen.io.

Thanks

`@types/node` needs to be declared as a dependency for this package to work with typescript and yarn v2

yarn v2 is a little stricter about dependencies than yarn v1 or npm. If a package abc imports another package xyz without having an explicit dependency on it, yarn v2 will not resolve the import - which is to say, this package breaks when used with typescript and yarn v2 (tsc complains about not being able to find a type definition file for @types/node). Please consider adding a dependency on @types/node for the yarn v2 users out there - thank you!

Consider adding some hook for refreshing resource after idle timeout is reached

Currently, if min resources is > 0 and they are idle in the pool for a long time, a resource might get stale without notification (at least postgresql does that in some cases).

This behaviour basically renders having a minimum amount of connections in the config useless, because having stale resources waiting in the pool is not useful.

We could add one more hook to the config, which would be called for a resource when the idle timeout is reached but the resource cannot be destroyed and freed because of the pool's min config value.

Should we throw an error if non-existent parameters are passed to pool creation?

I saw a bunch of deprecated code in knex that tries to check and give warnings if invalid parameters are passed to tarn (the checked parameter list actually didn't match tarn's...).

To me it would be more reasonable to do parameter validation in tarn instead of trying to keep up with the correct attributes on the knex side. Knex probably had that code because of its old habit of changing the pool on every release.

The downside of this is that throwing an error on an invalid parameter would be a breaking change. I could live with that, though.

Feature request: be able to change the Pool options "max" and "min" at runtime

I would like to use this lib with Postgres on AWS Aurora Serverless. The problem is that Aurora Serverless scales up at runtime, so I need to tweak the max connections value to be able to utilize it as it scales up. Is this possible with tarn.js?

Looking at the code it seems like theoretically it could work like this:

const p = new Pool({max: 90}) // one ACU
// when scaling occurs
p.max = 180 // two ACUs
p.check()

is this assumption correct?

Tarn: unsupported option opt.xxx

When I extend a new Pool from Tarn and put extra keys on the TarnPoolOptions to implement my requirements, like below:

{
    idleTimeoutMillis: 3000,
    acquireTimeoutMillis: 3000,
    someMyOwnKey: 'xxx'
}

And when I pass the options to Tarn, it gives me the error:

Tarn: unsupported option opt.xxx

This problem appeared in version 1.2.0 because of this code:

const allowedKeys = {
    create: true,
    validate: true,
    destroy: true,
    log: true,
    min: true,
    max: true,
    acquireTimeoutMillis: true,
    createTimeoutMillis: true,
    destroyTimeoutMillis: true,
    idleTimeoutMillis: true,
    reapIntervalMillis: true,
    createRetryIntervalMillis: true,
    propagateCreateError: true
};
for (let key of Object.keys(opt)) {
    if (!allowedKeys[key]) {
        throw new Error(`Tarn: unsupported option opt.${key}`);
    }
}

I think it's a breaking change, but you only bumped the minor version.

Typing info for log function misses 2nd argument

The optional log function is called with 2 arguments, the message and the level (although the level is always 'warn'). The typing information only declares the 1st argument:

log?: (msg: string) => any;

Therefore, the example pool from README.md won't compile in TypeScript:

    log: (message, logLevel) => console.log(`${logLevel}: ${message}`),
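
The fix would presumably just be declaring the second parameter as well (a sketch of the corrected shape, not the published typings):

log?: (msg: string, logLevel: string) => any;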

destroy a specific resource ?

When a resource gets an unknown error and is not usable anymore, it would make sense to be able to call pool.destroy(resource) on it.

Increase timeout request

I have set the timeout modifiers in my configuration in all possible ways, but it never changes from 15000ms. I always get the timeout error "RequestError: Timeout: Request failed to complete in 15000ms". Thanks for the help.

       dbSettings: {
                user: 'abcd',
                password: 'abcd',
                server: 'abcd',
                database: 'abcd',
                setTimeout: 900000,
                connectionTimeout: 900000,
                requesTimeout: 900000,
                pool:{
                    idleTimeoutMillis: 90000
                    },
                options:{
                    encrypt: false, 
                    trustServerCertificate: true,
                    }
               }

Make it possible to limit the times a resource could be used

I'm trying to use tarn.js with puppeteer to create a pool of workers that create screenshots from incoming URLs. Chrome is leaking resources, and after a while the workers become unstable. One solution would be to limit the number of times a worker can be used.

It would be a great addition to tarn.js if we could add a property like maxAcquireCount. After a worker has been used N times, it would be destroyed and recreated.
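
Until something like that exists, a rough approximation with the current API might be to count uses in validate, since returning false makes the pool destroy the resource and acquire another (a sketch; the counting logic, launchWorker and worker.close() are illustrative, not part of tarn):

const { Pool } = require('tarn');

const MAX_ACQUIRE_COUNT = 500;          // illustrative limit
const useCounts = new WeakMap();

const pool = new Pool({
  create: () => launchWorker(),         // hypothetical puppeteer worker factory
  destroy: worker => worker.close(),    // hypothetical cleanup
  // validate runs before a resource is handed out; returning false makes
  // tarn destroy this worker and hand out (or create) another one
  validate: worker => {
    const used = (useCounts.get(worker) || 0) + 1;
    useCounts.set(worker, used);
    return used <= MAX_ACQUIRE_COUNT;
  },
  min: 0,
  max: 4
});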

Remove Travis config

Package-lock is useful for being able to "timetravel" and see which sets of libraries were used with older working builds, but to make sure that this package works correctly for the end user, the latest available packages should be used for testing.

sindresorhus/ama#479 (comment)

Curious behavior with idle timeouts (Postgres)

We're using Knex 0.95.15 with tarn 3.0.2 and noticed some of our CI runs are hanging for many seconds past mocha's exit. After adding wtfnode to investigate we see the following open handles and timers running past the end of our test suite:

- Timers:
1909  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/[email protected][email protected]/node_modules/pg-pool/index.js:324
1910  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/[email protected][email protected]/node_modules/pg-pool/index.js:324
1911  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/[email protected][email protected]/node_modules/pg-pool/index.js:324
1912  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/[email protected][email protected]/node_modules/pg-pool/index.js:324
1913  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/[email protected][email protected]/node_modules/pg-pool/index.js:324
1914  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/[email protected][email protected]/node_modules/pg-pool/index.js:324
1915  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/[email protected][email protected]/node_modules/pg-pool/index.js:324
1916  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/[email protected][email protected]/node_modules/pg-pool/index.js:324
1917  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/[email protected][email protected]/node_modules/pg-pool/index.js:324
1918  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/[email protected][email protected]/node_modules/pg-pool/index.js:324
1919  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/[email protected][email protected]/node_modules/pg-pool/index.js:324
1920  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/[email protected][email protected]/node_modules/pg-pool/index.js:324
1921  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/[email protected][email protected]/node_modules/pg-pool/index.js:324
1922  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/[email protected][email protected]/node_modules/pg-pool/index.js:324
1923  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/[email protected][email protected]/node_modules/pg-pool/index.js:324
1924  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/[email protected][email protected]/node_modules/pg-pool/index.js:324
1925  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/[email protected][email protected]/node_modules/pg-pool/index.js:324
1926  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/[email protected][email protected]/node_modules/pg-pool/index.js:324
1927  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/[email protected][email protected]/node_modules/pg-pool/index.js:324
1928  - (10000 ~ 10 s) (anonymous) @ /node_modules/.pnpm/[email protected][email protected]/node_modules/pg-pool/index.js:324

There's very likely something I missed here, but this doesn't feel right since the Tarn idleTimeoutMillis defaults to 30000ms. Even when I set it (through Knex) lower, this issue still occurs with 10 second timeouts and clearly seems to originate from pg-pool and not Tarn.

There's a section in Knex' PoolConfig types that explicitly references Tarn configs in a way that appears separate from the rest of the PoolConfig interface, which makes me curious.

My expectation was that changing idleTimeoutMillis in our Test environment would impact these timeouts. Interestingly, node-postgres recently added a (false defaulted) config to allowExitOnIdle which feels like it could be useful here but I can't set it through Knex/Tarn since it's not allowed by the PoolConfig interface.

Despite looking into this for quite a while now I can't quite seem to find where Knex/Tarn interact with node-postgres and where these configs would be clashing. I'd be grateful if anyone has pointers, and hopefully someone encountering similar issues can benefit from my research. 🙃

Prettier also prettifies generated js code

I noticed during a release that my locally built code differed after running npm run build and comparing it to the code read from github.

Looks like prettier also prettifies code compiled from typescript.

Prettier should ignore the /lib directory.

Error: aborted

I started seeing this problem after upgrading from knex 0.21.13 to 0.95.4.

In using the destroy() method of knex to destroy connection pools at the end of jest tests (to avoid hanging tests), I am intermittently getting the following error:

  ● Test suite failed to run

    Error: aborted

      94 |     async destroy() {
      95 |         try {
    > 96 |             return await Promise.all([
         |                    ^
      97 |                 this.aurora,
      98 |                 this.diligence,
      99 |                 this.core,

      at PendingOperation.abort (../RCG-Builders/node_modules/knex/node_modules/tarn/dist/PendingOperation.js:25:21)
      at ../RCG-Builders/node_modules/knex/node_modules/tarn/dist/Pool.js:208:25
          at Array.map (<anonymous>)
      at ../RCG-Builders/node_modules/knex/node_modules/tarn/dist/Pool.js:207:53
          at runMicrotasks (<anonymous>)
      at Client_MSSQL.destroy (../RCG-Builders/node_modules/knex/lib/client.js:321:9)
          at async Promise.all (index 1)
      at DataContext.destroy (../RCG-Builders/dist/DataContext.js:96:20)
      at Object.<anonymous> (integration/investments.integration.test.ts:643:3)

(node:11314) UnhandledPromiseRejectionWarning: Error: aborted
(Use `node --trace-warnings ...` to show where the warning was created)
(node:11314) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1904)
(node:11314) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
(node:11314) UnhandledPromiseRejectionWarning: Error: aborted
(node:11314) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1905)
(node:11314) UnhandledPromiseRejectionWarning: Error: aborted
(node:11314) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1906)
(node:11314) UnhandledPromiseRejectionWarning: Error: aborted
(node:11314) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1907)
(node:11314) UnhandledPromiseRejectionWarning: Error: aborted
(node:11314) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1908)
(node:11314) UnhandledPromiseRejectionWarning: Error: aborted
(node:11314) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1909)
(node:11314) UnhandledPromiseRejectionWarning: Error: aborted
(node:11314) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1910)

My code that is erroring is

  async destroy() {
    try {
     await Promise.all(
        [
          this.aurora,
          this.diligence,
          this.core,
          this.dbaUse,
          this.risk,
        ].map((conn) => conn.destroy())
      );
    } catch {}
  }

and this is called at the end of each test suite

afterAll(async () => {
  await builder.destroy();
});

To my knowledge, the way I am awaiting inside the try block means that it should catch both errors that occur before promises are created and returned, and errors due to rejected promises. However, this is still producing unhandled promise rejections, which makes no sense to me.

I see that the source code of tarn is using setInterval, so I suspect that this is causing the promise rejections to be unhandled, since tarn's creation of the promises may be deferred onto the event loop and not catchable by me. So I don't know how to work around this problem.

Furthermore, I don't see any problem with how my code is trying to destroy a list of connection pools (Knex<any, unknown[]>). Each is a separate connection with its own pool, so destroying one should have no effect on destroying another. I have also tried awaiting each destroy() in sequence, writing out the call to destroy for each connection, but I still get the same unhandled rejection error causing false negatives on my tests. I am running the tests sequentially (using jest --runInBand) not in parallel, so there shouldn't be any issues related to overlapping test runs.

Allow idleTimeoutMillis to be 0

Hello,

I am using Knex.js to handle my database operations. Under the hood it uses tarn.js to handle the pool.

Trying to set idleTimeoutMillis to 0, I noticed that I get

Error: Tarn: invalid opt.idleTimeoutMillis 0

Going through the code I saw that the function

export function checkRequiredTime(time: number) {
  return typeof time === 'number' && time === Math.round(time) && time > 0;
}

actually expects time to be strictly larger than 0. Should 0 be allowed here? The node-postgres package allows 0 (and that is exactly what I need). If the answer is yes, I could create a PR for this :)
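
If 0 were to be allowed, the check would presumably just relax the comparison (a sketch, not a committed change):

export function checkRequiredTime(time: number) {
  // allow 0 as a valid timeout value (>= instead of >)
  return typeof time === 'number' && time === Math.round(time) && time >= 0;
}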

TypeError: this._stopReaping is not a function

this._stopReaping();
         ^
TypeError: this._stopReaping is not a function
    at process.destroy (/Users/test/node_modules/tarn/lib/Pool.js:166:10)
    at emitNone (events.js:86:13)
    at process.emit (events.js:188:7)
    at Signal.wrap.onsignal (internal/process.js:215:44)

pool.destroy() is called via:

process.on('SIGTERM', pool.destroy);

What's a better way to clean up the pool when there is a SIGTERM OS signal?
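
The error happens because passing pool.destroy by reference loses its this binding (note process.destroy in the stack trace above), so wrapping the call in a function should presumably be enough:

// call destroy on the pool so `this` stays bound to the pool instance
process.on('SIGTERM', () => {
  pool.destroy();
});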

A promise was created in a handler but was not returned from it

Hello!

I'm getting plenty of this warning (using knex).

(node:7714) Warning: a promise was created in a handler at node_modules/tarn/lib/Pool.js:297:24 but was not returned from it, see http://goo.gl/rRqMUw
    at new Promise (node_modules/bluebird/js/release/promise.js:79:10)

Is this a false warning or something that could potentially be fixed?

`createFail` event handler not called when resource creation times out

I've been debugging an issue in production and have been trying to log out tarn lifecycle events. After reading a confusing set of logs and diving deep into this library, it seems that we're only firing the createFail event when the resource creator rejects, but not when the PendingOperation rejects with a timeout. Why is this the case?

For reference, here's the code I'm talking about:

this._executeEventHandlers('createFail', eventId, err);

Suggested alternative:

      .catch(err => {
        this._executeEventHandlers('createFail', eventId, err);

        if (pendingCreate.isRejected) {
          return null;
        }

        remove(this.pendingCreates, pendingCreate);

        // Not returned on purpose.
        pendingCreate.reject(err);
        return null;
      });

It seems to me that since this creation has failed (due to timeout or otherwise), we should notify.

Default min and max?

In the README it looks like the default min & max are 2 & 10, but it doesn't look like any defaults are set in Pool.js:

tarn.js/lib/Pool.js

Lines 79 to 80 in bf18700

this.min = opt.min;
this.max = opt.max;

Are the defaults set some other way?

\node_modules\tarn\dist\Pool.js:65 throw new Error(`Tarn: unsupported option opt.${key}`);

\node_modules\tarn\dist\Pool.js:65
throw new Error(`Tarn: unsupported option opt.${key}`);

This is caused by lines 47 through 66 of the Pool.js file.
const allowedKeys = {
    create: true,
    validate: true,
    destroy: true,
    log: true,
    min: true,
    max: true,
    acquireTimeoutMillis: true,
    createTimeoutMillis: true,
    destroyTimeoutMillis: true,
    idleTimeoutMillis: true,
    reapIntervalMillis: true,
    createRetryIntervalMillis: true,
    propagateCreateError: true
};
for (const key of Object.keys(opt)) {
    if (!allowedKeys[key]) {
        throw new Error(`Tarn: unsupported option opt.${key}`);
    }
}

To be able to use the mssql npm package, I finally realized all I needed to do was comment out this section. Can you please add a key that would work with the mssql npm package?

acquire() fails immediately with TimeoutError if pool in a weird state

Hi! I think I'm seeing a problem where the promise returned by acquire() is immediately rejected with TimeoutError (much sooner than acquireTimeoutMillis). I'm still narrowing down the exact repro steps, but it's something like:

  1. Create a new pool.
  2. Acquire all the resources allowed by max and hold them.
  3. Call acquire() more times and wait for these to time out.
  4. Release the resources.
  5. Call acquire() again. This should succeed but it doesn't. The returned promise is immediately rejected with TimeoutError.

I've tried adding a bunch of debugging code and right before step 5 I believe pendingCreates, pendingAcquires, and pendingDestroys are all empty lists. free contains resources. The number of resources doesn't seem to matter. It seems like this happens if it's 1 or max. So it seems like there must be a leftover TimeoutError on the Resource in the free list. But I'm having a hard time wrapping my head around how all this works so I might be wrong about all of this.

I'm testing this via Objection and knex, so I also can't eliminate those as the source of the problem I'm seeing.

I'll try to write some repro code for this soon.
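
A minimal standalone sketch of those steps, using the Pool API from the README above (timeouts shortened so the repro runs quickly; this mirrors the reported steps rather than confirming the bug):

const { Pool, TimeoutError } = require('tarn');

const pool = new Pool({
  create: () => Promise.resolve({}),
  destroy: () => {},
  min: 0,
  max: 1,
  acquireTimeoutMillis: 100
});

async function repro() {
  // steps 1-2: acquire the only resource and hold it
  const held = await pool.acquire().promise;

  // step 3: the pool is exhausted, so this acquire should time out
  try {
    await pool.acquire().promise;
  } catch (err) {
    console.log('expected timeout:', err instanceof TimeoutError);
  }

  // step 4: release the held resource
  pool.release(held);

  // step 5: this acquire should now succeed, but according to the report
  // it is rejected immediately with TimeoutError
  const again = await pool.acquire().promise;
  pool.release(again);
  await pool.destroy();
}

repro();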

Operation timed out for an unknown reason

I have a web app that keeps crashing with this error:

  • 2020-01-23T08:07:36.485-05:00 [APP/PROC/WEB/0] [ERR] (node:134) UnhandledPromiseRejectionWarning: Error: operation timed out for an unknown reason
  • 2020-01-23T08:07:36.485-05:00 [APP/PROC/WEB/0] [ERR] at /home/vcap/app/node_modules/tarn/lib/PendingOperation.js:16:27
  • 2020-01-23T08:07:36.485-05:00 [APP/PROC/WEB/0] [ERR] (node:134) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 140)

Searched far and wide but can't pinpoint the exact database call that's triggering this. Attempts to resolve have included specifying Node version 13.5.0, increasing max pool size, updating all npm packages.

Any help is appreciated.

Destroy should expect a Promise

The docs say:

  // function that destroys a resource. This is always synchronous
  // as nothing waits for the return value.
  destroy: (someResource) => {
    someResource.cleanup();
  },

IMO, this could be a wrong assumption: what if the whole cleanup is asynchronous? For example, amqplib.close returns a Promise.

Currently, this means that cleaning up is fire-and-forget.

Is it really hard to support a Promise?

Thank you in advance.
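
For reference, the usage section above now documents that destroy may return a promise, so an asynchronous cleanup like the amqplib case could presumably be written as:

const amqplib = require('amqplib');
const { Pool } = require('tarn');

const pool = new Pool({
  create: () => amqplib.connect('amqp://localhost'),  // hypothetical broker URL
  // amqplib's close() returns a promise; returning it lets tarn await the
  // cleanup, bounded by destroyTimeoutMillis
  destroy: connection => connection.close(),
  min: 0,
  max: 5
});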

Support exponential backoff for create retries

When create fails, it would be nice if we could exponentially increase the retry interval, and reset it to the original value once a resource was successfully created (or linearly decrease it with each successfully created resource).

I have quickly implemented this in a custom subclass like this:

class ExponentialBackoffLinearDecreasePool<T> extends Pool<T> {
  constructor(opt: PoolOptions<T>) {
    super(opt);
    const baseCreateRetryIntervalMillis = this.createRetryIntervalMillis;
    this.on('createFail', () => {
      if (this.createRetryIntervalMillis === this.createTimeoutMillis) return;
      this.createRetryIntervalMillis = Math.min(this.createRetryIntervalMillis * 2, this.createTimeoutMillis);
      this.log(`Increased createRetryIntervalMillis to ${this.createRetryIntervalMillis}`, 'debug' as any);
    });
    this.on('createSuccess', () => {
      if (this.createRetryIntervalMillis === baseCreateRetryIntervalMillis) return;
      this.createRetryIntervalMillis = Math.max(this.createRetryIntervalMillis - baseCreateRetryIntervalMillis, baseCreateRetryIntervalMillis);
      this.log(`Decreased createRetryIntervalMillis to ${this.createRetryIntervalMillis}`, 'debug' as any);
    });
  }
}

Perhaps something like this could be integrated?

Event Emitter methods missing (.once) and (.off)

The pool exposes .on(), which makes the user think it's an EventEmitter, but the pool adds additional complexity by having the event emitter as a field. If this approach were to stay, the pool should also expose .once, .off, etc.

It is even marked protected in the type annotation, which doesn't affect me but may affect developers who use tooling for that.

If you are waiting for the pool to release some resources, it is possible to do the following:

var { once } = require('events');
(async () => {
  await once(pool.emitter, 'release');
  console.log('i can send more requests now');
})();

It would be more intuitive to do await once(pool, 'release'), but internally this calls "once", which is not copied onto the pool class. It seems like this is something that is definitely useful for a pool, so if there is another way to do it, I would be very interested to know.

Consider a new config option queryTimeoutMillis?

My knex hangs infinitely in production. I am still not sure whether it is stuck during a query (still working on it; I've turned on DEBUG=knex:* and am waiting...). The production network environment is very bad and hard to simulate. It is possible that a connection dies without any response in the production network (maybe a TCP proxy works weirdly, holds the connection for knex, but never makes a new connection to the db after the old one is broken).

I see there is already a timeout function on the builder, but it would be better if tarn could provide a global timeout option for queries.
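
For per-query control in the meantime, the builder-level timeout mentioned above looks roughly like this (a knex API, not a tarn option; the accounts table is illustrative and cancel support depends on the dialect):

// knex query-builder timeout: rejects (and, where supported, cancels) the
// query after the given number of milliseconds
const rows = await knex('accounts')
  .select('*')
  .timeout(5000, { cancel: true });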
