
poolboy's Introduction

Poolboy - A hunky Erlang worker pool factory


Poolboy is a lightweight, generic pooling library for Erlang with a focus on simplicity, performance, and rock-solid disaster recovery.

Usage

1> Worker = poolboy:checkout(PoolName).
<0.9001.0>
2> gen_server:call(Worker, Request).
ok
3> poolboy:checkin(PoolName, Worker).
ok
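Pools are usually supervised via child specs (see the example below), but for quick experimentation a pool can also be started directly. A minimal sketch, where `my_worker` is a hypothetical module implementing the `poolboy_worker` behaviour:

```erlang
%% `my_worker' is a hypothetical module implementing the poolboy_worker
%% behaviour (it must export start_link/1 returning {ok, Pid}).
{ok, Pool} = poolboy:start_link([{name, {local, my_pool}},
                                 {worker_module, my_worker},
                                 {size, 5},
                                 {max_overflow, 10}]),
Worker = poolboy:checkout(my_pool),
ok = poolboy:checkin(my_pool, Worker).
```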

Example

This is an example application showcasing database connection pools using Poolboy and epgsql.

example.app

{application, example, [
    {description, "An example application"},
    {vsn, "0.1"},
    {applications, [kernel, stdlib, sasl, crypto, ssl]},
    {modules, [example, example_worker]},
    {registered, [example]},
    {mod, {example, []}},
    {env, [
        {pools, [
            {pool1, [
                {size, 10},
                {max_overflow, 20}
            ], [
                {hostname, "127.0.0.1"},
                {database, "db1"},
                {username, "db1"},
                {password, "abc123"}
            ]},
            {pool2, [
                {size, 5},
                {max_overflow, 10}
            ], [
                {hostname, "127.0.0.1"},
                {database, "db2"},
                {username, "db2"},
                {password, "abc123"}
            ]}
        ]}
    ]}
]}.

example.erl

-module(example).
-behaviour(application).
-behaviour(supervisor).

-export([start/0, stop/0, squery/2, equery/3]).
-export([start/2, stop/1]).
-export([init/1]).

start() ->
    application:start(?MODULE).

stop() ->
    application:stop(?MODULE).

start(_Type, _Args) ->
    supervisor:start_link({local, example_sup}, ?MODULE, []).

stop(_State) ->
    ok.

init([]) ->
    {ok, Pools} = application:get_env(example, pools),
    PoolSpecs = lists:map(fun({Name, SizeArgs, WorkerArgs}) ->
        PoolArgs = [{name, {local, Name}},
                    {worker_module, example_worker}] ++ SizeArgs,
        poolboy:child_spec(Name, PoolArgs, WorkerArgs)
    end, Pools),
    {ok, {{one_for_one, 10, 10}, PoolSpecs}}.

squery(PoolName, Sql) ->
    poolboy:transaction(PoolName, fun(Worker) ->
        gen_server:call(Worker, {squery, Sql})
    end).

equery(PoolName, Stmt, Params) ->
    poolboy:transaction(PoolName, fun(Worker) ->
        gen_server:call(Worker, {equery, Stmt, Params})
    end).
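With the application running, clients can issue queries through the pool by name via the wrapper functions above. For example (a sketch, assuming the pool1 configuration from example.app):

```erlang
%% Assumes the example application above is started and pool1 is configured.
ok = example:start(),
Result = example:squery(pool1, "SELECT 1").
```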

example_worker.erl

-module(example_worker).
-behaviour(gen_server).
-behaviour(poolboy_worker).

-export([start_link/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2,
         code_change/3]).

-record(state, {conn}).

start_link(Args) ->
    gen_server:start_link(?MODULE, Args, []).

init(Args) ->
    process_flag(trap_exit, true),
    Hostname = proplists:get_value(hostname, Args),
    Database = proplists:get_value(database, Args),
    Username = proplists:get_value(username, Args),
    Password = proplists:get_value(password, Args),
    {ok, Conn} = epgsql:connect(Hostname, Username, Password, [
        {database, Database}
    ]),
    {ok, #state{conn=Conn}}.

handle_call({squery, Sql}, _From, #state{conn=Conn}=State) ->
    {reply, epgsql:squery(Conn, Sql), State};
handle_call({equery, Stmt, Params}, _From, #state{conn=Conn}=State) ->
    {reply, epgsql:equery(Conn, Stmt, Params), State};
handle_call(_Request, _From, State) ->
    {reply, ok, State}.

handle_cast(_Msg, State) ->
    {noreply, State}.

handle_info(_Info, State) ->
    {noreply, State}.

terminate(_Reason, #state{conn=Conn}) ->
    ok = epgsql:close(Conn),
    ok.

code_change(_OldVsn, State, _Extra) ->
    {ok, State}.

Options

  • name: the pool name
  • worker_module: the module that represents the workers
  • size: maximum pool size
  • max_overflow: maximum number of temporary overflow workers created when the pool is empty (i.e. all size workers are checked out)
  • strategy: lifo or fifo; determines whether workers that are checked in should be placed first or last in the line of available workers. lifo operates like a traditional stack, fifo like a queue. Default is lifo.
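Taken together, these options are passed as the PoolArgs to poolboy:child_spec/3 or poolboy:start_link/1,2. A minimal sketch of a FIFO pool, where `my_worker` is a hypothetical worker module:

```erlang
%% `my_worker' is a hypothetical poolboy_worker module.
PoolArgs = [{name, {local, my_pool}},
            {worker_module, my_worker},
            {size, 10},
            {max_overflow, 20},
            {strategy, fifo}],
ChildSpec = poolboy:child_spec(my_pool, PoolArgs, []).
```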

Authors

License

Poolboy is available in the public domain (see UNLICENSE). Poolboy is also optionally available under the ISC license (see LICENSE), meant especially for jurisdictions that do not recognize public domain works.


poolboy's Issues

incorrect workers termination

Hello, I have some trouble with worker termination. I successfully implemented a worker that establishes a connection to my DB, and everything works fine.
The problem starts when you call application:stop(myapp).
The terminate function in the worker process begins to execute, but does not finish. You can simulate it with code like this:

terminate(_Reason, #state{conn=Conn}) ->
    io:format("BEFORE DISCONNECT ~n"),
    % all top-level supervisors have a 5000 ms shutdown timeout
    timer:sleep(2000),
    io:format("AFTER DISCONNECT ~n"),
    ok. 

1> application:stop(myapp).
BEFORE DISCONNECT
ok

Sorry, I am an Erlang novice, but AFAIK from the supervisor docs:

Important note on simple-one-for-one supervisors: The dynamically created child processes of a simple-one-for-one supervisor are not explicitly killed, regardless of shutdown strategy, but are expected to terminate when the supervisor does (that is, when an exit signal from the parent process is received).

Maybe this is the reason for such behaviour?
If it helps, my worker spawns a new process which talks to the DB, and that new process monitors its parent. During application:stop it gets this message: {'DOWN',#Ref<0.0.0.146>,process,<0.102.0>,killed}.

Pool stops working after poolboy:transaction times out.

I created a sample project which reproduces an error in poolboy: https://github.com/eshubin/poolboy_test.

What happens is that poolboy:transaction function terminates via timeout
https://github.com/eshubin/poolboy_test/blob/ver1/src/poolboy_test.erl#L35,
https://github.com/eshubin/poolboy_test/blob/ver1/src/poolboy_test.erl#L63
and then poolboy:status always returns {full,0,0,1} https://github.com/eshubin/poolboy_test/blob/ver1/src/poolboy_test.erl#L64
and all further calls of poolboy:transaction also time out https://github.com/eshubin/poolboy_test/blob/ver1/src/poolboy_test.erl#L65.
The code in the fun is actually never executed at all.
All tests can be launched with "rebar eunit" in the top-level directory.

Is it possible to restore pool's functionality after such timeout?

Support a maximum connection age

I did not find this in the documentation, but it would be great if we could specify a maximum connection age so that connections are terminated and new ones are started after some time.

Poolboy should notify workers when they are returned to the pool

When working with a database such as postgresql, the workers owning the connection need to be notified when returned to the pool in order to give the worker the opportunity to reset the state of the connection (rollback pending transactions for example).

Trying to understand max_overflow

Hi,

I am trying to determine whether poolboy can dynamically increase the pool size until it reaches size.

Allocating all resources at the start seems wasteful.

Regards,
Thanks

Possible race-condition/worker leaks in wait_valid

wait_valid(StartTime, Timeout) (src/poolboy.erl#L260) gets the same Timeout value as gen_server:call(..., Timeout) (src/poolboy.erl#L35).
So when (Waited div 1000) is very close to Timeout, or when the poolboy process has a large message queue, a race condition between gen_server:call(.., {checkout, ...}) and add_waiting() + wait_valid() is possible:

gen_server:call(Pool, {checkout, ...}) has already raised a timeout, but wait_valid() still returns true.
For example (pseudocode, HH:MM:SS is the action's timestamp):

% 18:00:00
gen_server:call(Pool, {checkout, ..., 5000) 

% 18:00:04 (not 18:00:00, because of large mailbox)
handle_call({checkout, ...) when state == full
   add_waiting(From, Timeout, State#state.waiting)
       queue:in({Pid,
                 Timeout,
                 os:timestamp()},  % os:timestamp() returns 18:00:04 !!!
               Queue).

% 18:00:05 - gen_server:call fails because of timeout

% 18:00:06 somebody return worker to pool, by poolboy:checkin()
handle_checkin(...)
   case wait_valid("18:00:04" = StartTime, 5000 = Timeout) ...
      % and it returns true, because (18:00:06 - 18:00:04) < 5000

Possible solution:
calculate the deadline in poolboy:checkout

checkout(Pool, Block, Timeout) ->
    Deadline = os:timestamp() + Timeout,  % pseudocode: timestamps are tuples
    gen_server:call(Pool, {checkout, Block, Deadline}, Timeout).

AND add some small amount of time for round trips, like:

wait_valid(Deadline) ->
    TimeToDeadline = timer:now_diff(Deadline, os:timestamp()),
    TimeToDeadline > ?ROUNDTRIP_OVERHEAD_TIME.

0.9.1 won't install with rebar (version is incorrectly listed as 0.8.1)

In my rebar.config:

    %% Generic connection pooling (for Riak).
    {poolboy,
     "0.9.1",
     {git,
      "git://github.com/devinus/poolboy.git",
      "0.9.1"}},

But when I run rebar get-deps:

Pulling poolboy from {git,"git://github.com/devinus/poolboy.git","0.9.1"}
Cloning into 'poolboy'...
ERROR: Dependency dir /home/vagrant/flap/deps/poolboy failed application validation with reason:
{version_mismatch,{"/home/vagrant/flap/deps/poolboy/src/poolboy.app.src",
{expected,"0.9.1"},
{has,"0.8.1"}}}.
ERROR: 'get-deps' failed while processing /home/vagrant/flap: rebar_abort

It looks like 0.9.1 just added some Elixir config files... but it doesn't install as 0.9.1 in a regular OTP application.

Improvements to Project's README Introduction

Hello,

After taking a look at poolboy's README, I think this project could benefit from a more detailed introduction and mission statement on what the software accomplishes. This would greatly help with drawing in new contributors or people who may be less familiar with how pooling libraries work.

One thing that would also be helpful in this regard would be including a more detailed description on how using this software could be beneficial in different scenarios, as jumping straight into code may not necessarily give a new contributor this information.

This documentation on open source projects may be helpful in terms of how to draw in users with a strong mission statement and introduction.
https://producingoss.com/en/getting-started.html#mission-statement

Thank you for taking the time to read this!

NOTE: This improvement idea was discovered during a gender inclusivity project that was performed on this project for an open source course.

Pool statistics

When observing a running system, it would be nice if poolboy provided statistics on the usage of the pool. Has the pool ever been empty? How long are processes checked out for? How often are pooled processes (or their temporary owners) crashing? I suspect there are several other things it would be useful to have metrics for. The simplest that would be personally useful to me is just a max_workers_ever, with a function to reset the stat. Do others think these kinds of things would be useful? Would observing such things with something like folsom have too much overhead?

cc/ @Vagabond

dismiss worker on timeout or other error

Consider the following pattern:

checkout
try
    use
after
    checkin
end

If a timeout on "use" or another error occurred, the worker may be:

  • busy: not sure if it will be ready next time
  • wrong: not usable anymore
  • good: but maybe it's better to dismiss it, let it have a rest, and start a fresh one after it has run X tasks. I see tcp_close sometimes when using this with pgsql; not sure what happened, but this option might help?

So it would be better to add an option to dismiss the worker. I added dismiss/2, and if it sounds good I can test it and make a pull request.

Thanks.

diff --git a/src/poolboy.erl b/src/poolboy.erl
index b482828..2267282 100644
--- a/src/poolboy.erl
+++ b/src/poolboy.erl
@@ -5,7 +5,7 @@

 -export([checkout/1, checkout/2, checkout/3, checkin/2, transaction/2,
          transaction/3, child_spec/2, child_spec/3, start/1, start/2,
-         start_link/1, start_link/2, stop/1, status/1]).
+         start_link/1, start_link/2, stop/1, status/1, dismiss/2]).
 -export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2,
          code_change/3]).

@@ -42,6 +42,10 @@ checkout(Pool, Block, Timeout) ->
 checkin(Pool, Worker) when is_pid(Worker) ->
     gen_server:cast(Pool, {checkin, Worker}).

+-spec dismiss(Pool :: node(), Worker :: pid()) -> ok.
+dismiss(Pool, Worker) when is_pid(Worker) ->
+    gen_server:cast(Pool, {dismiss, Worker}).
+
 -spec transaction(Pool :: node(), Fun :: fun((Worker :: pid()) -> any()))
     -> any().
 transaction(Pool, Fun) ->
@@ -131,6 +135,17 @@ handle_cast({checkin, Pid}, State = #state{monitors = Monitors}) ->
             {noreply, State}
     end;

+handle_cast({dismiss, Pid}, State = #state{monitors = Monitors, supervisor = Sup}) ->
+    case ets:lookup(Monitors, Pid) of
+        [{Pid, Ref}] ->
+            true = erlang:demonitor(Ref),
+            true = ets:delete(Monitors, Pid),
+            ok = dismiss_worker(Sup, Pid);
+        [] ->
+            ok
+    end,
+    {noreply, State};
+
 handle_cast(_Msg, State) ->
     {noreply, State}.

Stacktrace incorrect in checkout gen_server call

The stacktrace points to the wrong place if the checkout gen_server call exits because of a non-timeout reason to a named pool.

https://github.com/devinus/poolboy/blob/master/src/poolboy.erl#L52-L60

** (exit) exited in: :gen_server.call(Ellexir.Repo.Pool, {:checkout, #Reference<0.0.1.102>, true}, 5000)
    ** (EXIT) no process
    :erlang.send(Ellexir.Repo.Pool, {:"$gen_cast", {:cancel_waiting, #Reference<0.0.1.102>}}, [:noconnect])
    (stdlib) gen_server.erl:416: :gen_server.do_send/2
    (stdlib) gen_server.erl:232: :gen_server.do_cast/2
    (poolboy) src/poolboy.erl:58: :poolboy.checkout/3
    (db_connection) lib/db_connection/poolboy.ex:36: DBConnection.Poolboy.checkout/2
    (db_connection) lib/db_connection.ex:811: DBConnection.checkout/2
    (db_connection) lib/db_connection.ex:717: DBConnection.run/3
    (db_connection) lib/db_connection.ex:957: DBConnection.run_meter/3

/cc @fishcakez

brod-poolboy integration

Well, I'm building a service which uses brod as the producer as well as the consumer client for the Kafka message broker, and I'm trying to handle the worker processes through poolboy.
Now I'm stuck on where to call poolboy (in brod) to pass the messages (from the producer on the other side) and then get the response from poolboy back to brod.

I understand how poolboy works, but I'm not sure how to pass a request from brod to poolboy. Please note that I'm able to pass requests to poolboy explicitly (through commands), but I need to do it through brod (so that it doesn't have to publish messages manually using commands for each operation). This is part of a microservice backend project.

How do I accomplish this? How do I include poolboy as a dependency of brod?
Thanks!

epgsql example doesn't recover gracefully from postgresql restarts

I've used the epgsql pool example from your readme. I don't understand why you are trapping the exit signal in the pgsql_connection worker. If I remove that line, a restart/full stop of postgres is detected and results in the whole application being shut down. If, OTOH, I keep the line, the next time a process queries the database, that process will crash. I don't know if that's intended behaviour, but the behaviour I would expect is that processes using an epgsql pool crash only if they submitted a faulty query or if the database server crashed at the very moment of the request.

Related question: do you have a suggestion to prevent the whole supervisor/application from shutting down when the database is restarted? I am not sure about whether/where to introduce a delay in the code.

What is max_overflow?

The documentation states:

max_overflow: maximum number of workers created if pool is empty

What does this mean? When might this happen? How is this different from max_size?

Is Poolboy supported on Windows?

I tried using poolboy in my Elixir program on Windows. Mix fails to compile it. Something about rebar.


>mix deps.compile
==> poolboy (compile)
Compiling src/poolboy.erl failed:
src/poolboy.erl:17: type queue() undefined
ERROR: compile failed while processing c:/Users/my_user/Projects/my_app/deps/poolboy: rebar_abort
** (Mix) Could not compile dependency :poolboy, "escript.exe "c:/Users/my_user/.mix/rebar"" command failed. You can recompile this dependency with "mix deps.compile poolboy", update it with "mix deps.update poolboy" or clean it with "mix deps.clean poolboy"

On Mac it compiled fine:


$ mix
Could not find "rebar", which is needed to build dependency :poolboy
I can install a local copy which is just used by Mix
Shall I install rebar? [Yn] Y
* creating /Users/my_user/.mix/rebar
==> poolboy (compile)
Compiled src/poolboy_sup.erl
Compiled src/poolboy_worker.erl
Compiled src/poolboy.erl
Compiled lib/my_app.ex
Generated my_app app

Configurable worker dismiss timeout

I would like to have a configurable timeout before an overflow worker is dismissed.

Since starting a worker can be an expensive operation (e.g. establishing a DB connection), under a certain load on the pool workers are started and stopped continuously: the waiting queue might be empty, but 1 msec later it might be filled again. In this case, poolboy would have stopped a worker only to need to spin it up again milliseconds later, causing a performance decrease due to the worker startup overhead.

My proposal is to add a timeout so that workers are only dismissed after the timeout has passed and the queue is empty at that point in time.

Timeout before killing overflow workers

Currently poolboy will immediately kill unused overflow workers if there is no work left in the queue. If workers are expensive to create, this design can lead to excessive worker churn. It would be better to have a configurable timeout which lets unused overflow workers linger for a while before being killed, in case new work arrives. See #29 for additional discussion.

Improving the performance of Poolboy up 2 to 3 times

Hi, I've written a pooler to test some things and saw that it outperformed poolboy in most use cases, including what I think is the most common one: a given timeout for the checkout and no waiting queue at all.

I've set up some tests spawning 150k processes, waiting 2 secs, and repeating this 8 times (I've tested different configurations too, varying the repeats, waiting time, and total processes).
Each process tries to check out a connection, with a timeout of 3 s and no waiting queue.
If it is able to check out, it sends a message and sleeps for 250 ms to "simulate" a real waiting time (I can't use random waits because then the tests would be meaningless).

What I arrived at is for instance, using 50workers for both poolers (this was tested warm, in different orders, cold, and several times):

boy: OK: 796 Failed: 1199204 Time: 10.091
queuer: OK: 837 Failed: 1199163 Time: 3.8459999999999996

And they're always in those ranges: slightly more processed, in about 1/3 of the time.
Regarding CPU & memory utilisation, the gains also seem significant (I imagine because the VM doesn't have to set so many call timers and undo them on timeout; far fewer exceptions raised, far fewer messages flowing, etc.).

So then I implemented the same preemptive check in poolboy, testing whether it was overloaded before attempting the checkout. With that check, poolboy even became slightly faster than my pooler on average. This means the change can be implemented with no client-side code changes, for 2 to 3 times faster checkout when overloaded (it doesn't apply to unbounded queues, because they can't get overloaded; in those cases the performance profile stays as it is now).

There's only one slight change that needs to be decided: how to handle checkouts coming from remote nodes, either by bypassing the pre-check or by doing an RPC instead; it's also easily solvable.

Because poolboy is used in several places within Elixir packages I would like to write a PR for this if there's interest? The changes are not complex at all.

new_worker failure causes poolboy to crash

Hi,

We use poolboy to manage a redis connection pool, and in our production environment we sometimes find redis connections closed.

Checking the code in poolboy.erl at line 275, we find that if new_worker fails (in our case, redis may fail to start due to a network error), it causes poolboy to crash, and all other existing redis connections are closed.

Maybe new_worker should handle the failure case, and not affect the other processes managed by poolboy.

node() should not be used in the type specs

The type spec for several functions in poolboy.erl expect to be passed a node name as the first argument (Pool). This is passed back into poolboy from the output of gen_server:start_link, which should be {ok, pid()} (or an error, obviously).

The spec should be changed from node() to pid().

Worker is stopped without restart

For below code

handle_info({'DOWN', Ref, _, _, _}, StateName, State) ->
    case ets:match(State#state.monitors, {'$1', Ref}) of
        [[Pid]] ->
            Sup = State#state.supervisor,
            ok = supervisor:terminate_child(Sup, Pid),
            {next_state, StateName, State};
        [] ->
            {next_state, StateName, State}
    end;

If the current state is full and one worker is stopped, the state can't be changed. So even if all workers are stopped, the state will always be kept as full.

It's unexpected.

What I think we can do here is check whether we can change the state and restart a new worker to handle the requests in the waiting list.

No Labels for Issues

There are no indications as to which issues are for "beginners" or a "good first issue" for those just starting to get involved in the project. Adding these could help new contributors feel more comfortable getting involved, by making it easy to find an issue that is appropriate for them to get their feet wet.

Why not verify that the waiting process is alive before sending a pool member to it?

When a process (call it ProcessA) calls checkout, poolboy puts it in the waiting queue if there is no member available, and then monitors ProcessA.
If ProcessA crashes, poolboy receives the 'DOWN' message and removes it from the waiting queue. When another process (call it ProcessB) calls checkin, poolboy checks the waiting queue and sends the member to one of the waiters. So far, no problem.
But consider this scenario:

----------------------------------------|-------------|
                                        |             |
        message1  --->  message2        |   poolboy   |
                                        |             |
----------------------------------------|-------------|
        message queue                     gen_server process

Message1 is ProcessA's 'DOWN', message2 is ProcessB's checkin, and poolboy calls queue:out(Waiting), getting the result

{{value, {{ProcessA, _}, CRef, MRef}}, Left}

The problem is here: if we do not verify that ProcessA is alive before sending the member to it, the member will never be returned, because ProcessA is already DOWN. The poolboy gen_server process simply has not yet handled the 'DOWN' message (message1), even though it has been put into the process mailbox.

Allow worker arguments of any type

There is no reason to restrict worker arguments to proplists. Since they are opaque to poolboy and passed through as is, they could be anything.

How to get the list of workers

Hey,

How do I get the list of workers for a specific pool? The only solution I found is:

5> element(3, sys:get_state(whereis(poolname))).
[<0.430.0>,<0.429.0>,<0.428.0>,<0.427.0>,<0.426.0>]

It would be good to have a function like poolboy:workers/1 for that.

Question: Is it possible to use an alternative process registry for worker processes?

Is it possible to use any process registry other than local for the worker processes, and if so, how would you go about it?

I would like to run the worker processes in global, for instance. I can't see how this would be done based on the example: the worker module starts using gen_server:start_link/3 instead of gen_server:start_link/4, which makes me wonder whether poolboy is able to give a worker a global name.

Thanks in advance,

Mattias

non Semantic Version `cat VERSION`

Using Elixir mix, I get the following error:

** (Mix) The application poolboy specified a non Semantic Version cat VERSION. Mix can only match the requirement ~> 1.2.1 against Semantic Versions, to match against any version, please use a regex as requirement

I think it is referring to this code: https://github.com/devinus/poolboy/blob/master/src/poolboy.app.src#L3

I know that this is not a Elixir library, but Ecto/postgrex on Elixir depends on it.

Is it possible to get a fix for this?

Runtime size changes

It would be nice to be able to change the size and max_overflow parameters at runtime. This would be useful for situations where you have long-running processes that it would be inconvenient to be interrupted by a node reboot. I suspect, however, that this change would be more complicated than a simple:

update_size(PoolPid, Size, MaxOverflow) ->
  gen_server:call(PoolPid, {update_state, {Size, MaxOverflow}}).

handle_call({update_state, {Size, MaxOverflow}}, _From, State) ->
  UpdatedState = State#state{size=Size, max_overflow=MaxOverflow},
  {reply, ok, UpdatedState}.

but I could be over-thinking it. I'm wondering about issues like: what happens if you shrink the pool size, do you kill workers? Just wait for them to be returned to the pool and then gracefully dismiss them? What happens if you raise the base pool size, do you pre-populate again?

Thoughts?

Contributing Doc or Information

Hello,
While looking at this project for an open source class examining software inclusiveness, we found that it could benefit from some guidelines on how a person can contribute to this project.

Whether it be basic instructions on fixing existing issues or working on enhancements, or even information on whom to contact for additional details, such a document could be quite helpful.

rebar

Not sure if this really is an issue.

But it would be nice if rebar were in the repository. I usually don't have rebar when I start machines, and I use a Makefile to compile code (without rebar).

Is it possible to have rebar in the repository?

Issues for readme.md

Hello,
After taking a look at poolboy's README, I think this project lacks a detailed introduction and mission statement explaining what the software accomplishes. I did not know how to run or use this software after downloading it. Also, for contributors like me, a beginner, I did not see any labels on the issues. Other projects have labels like beginner, C++, or Python under each issue, which makes it easy for contributors to find one they could fix.

Minor bug in 'wait_valid()'

wait_valid(infinity, _Timeout) ->
    true;
wait_valid(StartTime, Timeout) ->
    Waited = timer:now_diff(os:timestamp(), StartTime),
    (Waited div 1000) < Timeout.

In the first line, I believe infinity should be matched against the Timeout, not the StartTime.
Interestingly, this bug goes unnoticed because (Waited div 1000) < Timeout actually works as expected when Timeout is the atom infinity, due to Erlang's term ordering rules.

pool size question

Hello. I'm using poolboy with epgsql, and I have two windows with erl.exe, each running a server with its own database on one postgresql server. They have [{size,20},{max_overflow,70}] and [40,50] respectively. Both DBs have max_connections=100.
The problem is that I (rarely) receive 'too_many_connections', and I can't understand how that is possible. I suspect this is an issue with poolboy, not epgsql.

Reuse of workers with low values of size

I'm using Chicago Boss, and got curious about how I could make its initial memory usage a little bit lower. One obvious thing would be to start the database connection pool off with one worker, rather than five, which is probably fine, since this is a semi-embedded system where we won't normally have concurrent users. max_overflow was set to 20. Out of curiosity, I ran some simple benchmarks with "ab -c 10 -n 1000" (1000 requests, running 10 at a time), and noticed that dropping the size below 10 slows things down.

After some digging, I realized that poolboy is constantly tearing down and creating new connections. Something about this does not seem quite right: if I'm getting a lot of requests right now, I can expect to keep getting them for the immediate future, so I don't want to shut them down immediately.

I'm not exactly sure how things ought to work, but I'd expect workers to go away slowly, rather than being reaped quickly.

Potential race in poolboy:stop

I found this when playing with the testing code using my in-house concurrency testing tool (will be released soon!).

poolboy:stop/1 is implemented using gen_server:call/2, which returns to the caller before the registered name is released (I confirmed this in the gen_server code). Thus the testing code can be flaky due to failures of new_pool/2 when it's called after poolboy:stop has returned but before the process is really killed.

In realistic usage, I think it would cause failure if someone wants to restart a named pool using poolboy:stop. A better way of terminating a gen_server would be using gen_server:stop/1 (which will wait for the process to really end).

BTW, there is also a flaky test, reuses_waiting_monitor_on_worker_exit: it fails if get_monitors(Pool) is called after Self ! {worker, Worker} in the inner process, but before poolboy:checkout(Pool).

What's the main difference between self() and CRef in checkout?

I see a comment saying that if self() were used, it would be possible for the pool to check out a worker to a client while the client timed out before receiving the worker's pid. This would leave the worker checked out until the client process exited.
I don't understand under what condition using self() would cause the cancel_waiting in the catch clause not to work.

Weird Timeout logic question

I cannot understand the logic behind the checkout function. One value (Timeout) is used both as milliseconds in gen_server:call and as microseconds in Deadline.
Is this intended?

checkout(Pool, Block, Timeout) ->
    {MegaSecs, Secs, MicroSecs} = os:timestamp(),
    Deadline = {MegaSecs, Secs, MicroSecs + Timeout},
    gen_server:call(Pool, {checkout, Block, Deadline}, Timeout).
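A unit-consistent version would convert the millisecond timeout to microseconds before adding it to the os:timestamp() tuple. A sketch (not poolboy's actual fix), assuming Timeout is in milliseconds as gen_server:call/3 expects:

```erlang
checkout(Pool, Block, Timeout) ->
    {MegaSecs, Secs, MicroSecs} = os:timestamp(),
    %% 1 ms = 1000 µs. Note this still doesn't carry overflow from
    %% MicroSecs into Secs; a robust fix would normalize the tuple
    %% or compare deadlines in plain microseconds instead.
    Deadline = {MegaSecs, Secs, MicroSecs + Timeout * 1000},
    gen_server:call(Pool, {checkout, Block, Deadline}, Timeout).
```

As written upstream, a 5000 ms call timeout yields a deadline only 5000 µs (5 ms) in the future, so the deadline expires almost immediately.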

Poolboy workers should set the trap_exit flag if they use the terminate/2 function.

The terminate/2 function in example_worker.erl will never be called, so the pgsql connection is never closed.
You should use:

init(Args) ->
    process_flag(trap_exit, true),
    ...

http://www.erlang.org/doc/man/gen_server.html#Module:terminate-2

If the gen_server is part of a supervision tree and is ordered by its supervisor to terminate, this function will be called with Reason=shutdown if the following conditions apply:

  • the gen_server has been set to trap exit signals, and
  • the shutdown strategy as defined in the supervisor's child specification is an integer timeout value, not brutal_kill.

Even if the gen_server is not part of a supervision tree, this function will be called if it receives an 'EXIT' message from its parent. Reason will be the same as in the 'EXIT' message.

Otherwise, the gen_server will be immediately terminated.
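Putting the two callbacks together, a sketch of the fixed worker, assuming the pgsql:connect/4 and pgsql:close/1 calls from the README's epgsql example and an illustrative #state{} record:

```erlang
-record(state, {conn}).

init(Args) ->
    %% Trap exits so the supervisor's shutdown signal invokes
    %% terminate/2 instead of killing the process outright.
    process_flag(trap_exit, true),
    Hostname = proplists:get_value(hostname, Args),
    Database = proplists:get_value(database, Args),
    Username = proplists:get_value(username, Args),
    Password = proplists:get_value(password, Args),
    {ok, Conn} = pgsql:connect(Hostname, Username, Password,
                               [{database, Database}]),
    {ok, #state{conn = Conn}}.

terminate(_Reason, #state{conn = Conn}) ->
    %% With trap_exit set, this now actually runs on shutdown
    %% and closes the connection cleanly.
    ok = pgsql:close(Conn),
    ok.
```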

Failure to start worker crashes pool

If a worker cannot be started (for example, because a database's connection limit has been reached), the entire pool crashes. This is especially bad with overflow workers: requesting an overflow worker can crash the pool and all of the linked existing workers.

In my case, I'm working in a tight environment where other things running on the same machine may simply exhaust the local ports. A new connection to the database is then not possible, which is somewhat OK on its own; we can always retry later. However, I wouldn't find that out until starting an overflow worker is attempted, at which point all the existing connections in the pool crash.
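One way to contain this at the worker level is an init/1 that never fails outright: start disconnected and retry. A sketch under that assumption; try_connect/1 is a hypothetical helper wrapping the pgsql:connect call from the README example:

```erlang
-record(state, {conn, args}).

init(Args) ->
    process_flag(trap_exit, true),
    case try_connect(Args) of
        {ok, Conn} ->
            {ok, #state{conn = Conn, args = Args}};
        {error, _Reason} ->
            %% Could not connect (e.g. local ports exhausted).
            %% Start disconnected and retry shortly; the pool
            %% itself stays up.
            erlang:send_after(1000, self(), reconnect),
            {ok, #state{conn = undefined, args = Args}}
    end.

handle_info(reconnect, #state{conn = undefined, args = Args} = State) ->
    case try_connect(Args) of
        {ok, Conn} ->
            {noreply, State#state{conn = Conn}};
        {error, _Reason} ->
            erlang:send_after(1000, self(), reconnect),
            {noreply, State}
    end;
handle_info(_Info, State) ->
    {noreply, State}.
```

Requests arriving while conn is undefined would need an explicit {error, disconnected} reply; that part is left out of the sketch.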

If the caller dies, the worker is returned to the pool, even if it's still busy

I might be misreading the source code, but it appears to me that if I do something like this:

x() ->
  W = poolboy:checkout(P, true),
  gen_server:call(W, Req),
  poolboy:checkin(P, W).

If x dies while inside gen_server:call (can that actually happen?), then W is returned to the pool even though it is still busy handling the request. That means that when someone checks it out later, their gen_server:call will be queued behind this one.

Is there any way to get around this?
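One partial mitigation, sketched here rather than taken from poolboy itself: if the call fails, kill the worker instead of checking it back in, so the pool replaces it with a fresh process rather than recycling a possibly-busy one. (poolboy:transaction/2 wraps checkout/checkin in try/after, but it checks the worker back in regardless, so it has the same issue.)

```erlang
%% safe_call/3 is an illustrative wrapper, not part of poolboy's API.
safe_call(Pool, Req, Timeout) ->
    W = poolboy:checkout(Pool, true),
    try
        Reply = gen_server:call(W, Req, Timeout),
        poolboy:checkin(Pool, W),
        Reply
    catch
        Class:Reason ->
            %% The pool links/monitors the worker, so killing it
            %% causes a replacement to be started.
            exit(W, kill),
            %% Pre-OTP-21 idiom; newer code would use Class:Reason:Stack.
            erlang:raise(Class, Reason, erlang:get_stacktrace())
    end.
```

This covers exceptions in the caller (e.g. a call timeout), but not the caller being killed externally mid-call; handling that fully would require the pool or the worker to monitor the client and discard the worker on client death.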

Allow poolboy to manage supervisors

I'm using a library that creates each connection as a supervisor, because multiple processes are needed to manage a connection. Technically the connection is a child of the supervisor, but every function that expects a process to be passed in expects the supervisor. Thus, I'd like to manage a pool of supervisors instead of workers. The problem I ran into is that poolboy's child_spec function only creates worker specs. I've worked around this by defining my own child spec, but this seems like something others would plausibly want to do.

One question I have is, is this a horrible idea because of something inherent to the way poolboy works?

If it's not, I'd be happy to contribute this feature.
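For reference, the workaround amounts to writing the old-style child-spec tuple by hand with `supervisor` in the child-type field. A sketch, assuming the shape poolboy:child_spec/3 produces (the names here are illustrative):

```erlang
%% Like poolboy:child_spec/3, but tagged as a supervisor child
%% so the enclosing supervisor treats it accordingly.
supervisor_pool_spec(Name, PoolArgs, WorkerArgs) ->
    {Name,
     {poolboy, start_link, [PoolArgs, WorkerArgs]},
     permanent,
     5000,
     supervisor,
     [poolboy]}.
```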
