
Comments (3)

jrudolph commented on August 17, 2024

As described in the original PR, the buffer size used to be calculated as

val targetBufferSize = settings.maxOpenRequests - settings.maxConnections

The reasoning was that one request can be in flight on each of the connections, plus all the extra ones you want buffered as configured by max-open-requests, so that the total number of requests you could submit to a pool would indeed be max-open-requests. However, it turned out that in some cases the connections would not accept requests during ongoing connection errors, so in practice you could not submit max-open-requests requests concurrently to a pool.
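To make the numbers concrete, here is a small sketch of how the two settings relate to that calculation (the values are only illustrative and happen to match the defaults; the object and system names are made up):

import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.http.scaladsl.settings.ConnectionPoolSettings

object PoolBufferSizing extends App {
  implicit val system: ActorSystem = ActorSystem("pool-buffer-sizing")

  // Illustrative values; they also happen to be the defaults
  // (max-connections = 4, max-open-requests = 32).
  val settings = ConnectionPoolSettings(system)
    .withMaxConnections(4)
    .withMaxOpenRequests(32)

  // The pre-change buffer size described above: 32 - 4 = 28 buffered requests,
  // on the assumption that each of the 4 connections holds one in-flight request.
  val targetBufferSize = settings.maxOpenRequests - settings.maxConnections
  println(s"targetBufferSize = $targetBufferSize") // 28

  system.terminate()
}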

Prior to the mentioned change, N = max-open-connections

That was only true if max-open-requests was set to a lower value than max-connections.

After the change, we went back to the previous behavior where we again allow a buffer of max-open-requests requests in front of the pool, so that in the best case, indeed, N = max-open-requests + max-connections.

In any case, max-open-requests is not an exact setting because of the buffering involved; it is more like a lower bound on how many concurrent requests you can expect to submit to the pool without incurring an overflow. For example, with max-connections = 4 and max-open-requests = 32, up to 36 requests may be accepted in the best case before an overflow is reported.

I wonder how increasing the buffer can be a problem in your case, since we are handing out fewer errors than before. Can you show a test case that fails with the new behavior?

I can see how the description of the setting as "The maximum number of open requests accepted into the pool" does not really fit my description above as a lower bound (instead of an upper bound). In any case, if you really want to enforce an upper limit, you can always wrap the interface and count requests yourself to err out fast, roughly along the lines of the sketch below. Though, it's not quite clear to me when you would really want to be so strict.
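For example (just a sketch to illustrate the idea, not an official API; the class name, the limit, and the error message are made up), such a wrapper could look roughly like this:

import java.util.concurrent.atomic.AtomicInteger
import scala.concurrent.Future

import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.http.scaladsl.Http
import org.apache.pekko.http.scaladsl.model.{ HttpRequest, HttpResponse }

// Counts in-flight requests and fails fast once a strict upper limit is reached,
// instead of relying on the pool's own (inexact) buffering.
class StrictLimitClient(maxInFlight: Int)(implicit system: ActorSystem) {
  import system.dispatcher
  private val inFlight = new AtomicInteger(0)

  def singleRequest(request: HttpRequest): Future[HttpResponse] =
    if (inFlight.incrementAndGet() > maxInFlight) {
      inFlight.decrementAndGet()
      Future.failed(new IllegalStateException(s"More than $maxInFlight requests in flight, rejecting"))
    } else {
      // Hand off to the regular pool-based client and release the slot when done.
      Http().singleRequest(request).andThen { case _ => inFlight.decrementAndGet() }
    }
}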

After all, the whole client-side pool interface architecture has its downsides (many for legacy reasons) and would benefit from a proper streaming interface. On the other hand, even (or maybe rather especially) with a streaming interface you still get all the internal buffers, so we would not recommend relying on exact buffer management in any case. Also, a streaming interface with backpressure has its own challenges in the more complex cases (e.g. head-of-line blocking with super-pools, accidentally unrestricted buffers created by just adding streams to a common pool, buffers needed for internal retries, etc.).


jphelp32 commented on August 17, 2024

Thanks @jrudolph. Our use case is a reverse proxy API gateway service. We currently rely upon max-open-requests and max-connections as a service protection mechanism to fail fast when a target backend service is under duress but does not itself have the capability to fail fast. I'm not sure that we have a strong opinion on whether the configuration should be an upper bound or a lower bound. We just need to understand which is the expected behavior, and be able to verify that behavior with unit tests (a rough sketch of the kind of check we have in mind is below). To my knowledge, our existing unit tests (based on 10.2.9 behavior) never encountered the scenario that prompted the change. If it ever occurred in production, we didn't notice.
So would you say that this is working as designed and will not be changed going forward? Perhaps only a documentation update would be forthcoming to clarify the behavior to be expected?
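For illustration, the kind of check we would want to write looks roughly like this (only a sketch; the slow localhost backend, the numbers, and the names are placeholders, not code from our suite):

import scala.concurrent.Future
import scala.util.{ Failure, Success }

import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.http.scaladsl.Http
import org.apache.pekko.http.scaladsl.model.HttpRequest
import org.apache.pekko.http.scaladsl.settings.ConnectionPoolSettings
import org.apache.pekko.stream.BufferOverflowException

object OverflowCheck extends App {
  implicit val system: ActorSystem = ActorSystem("overflow-check")
  import system.dispatcher

  val settings = ConnectionPoolSettings(system)
    .withMaxConnections(4)
    .withMaxOpenRequests(32)

  // A deliberately slow test backend, so requests pile up in the pool and its buffer.
  val slowBackend = "http://localhost:8080/slow"

  // With the post-change behavior, rejections are only guaranteed once more than
  // max-open-requests + max-connections (here 36) requests are outstanding.
  val responses = (1 to 40).map { _ =>
    Http().singleRequest(HttpRequest(uri = slowBackend), settings = settings)
  }

  // Lift each result into a Try so a single rejection does not fail the whole batch.
  Future.sequence(responses.map(_.transform(Success(_)))).foreach { results =>
    val rejected = results.count {
      case Failure(_: BufferOverflowException) => true
      case _                                   => false
    }
    println(s"$rejected of ${results.size} requests were rejected with BufferOverflowException")
    system.terminate()
  }
}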


pjfanning commented on August 17, 2024

The Akka issue has not led to a change yet. So far, the new behaviour seems to have bedded in.

