Comments (6)

djc avatar djc commented on July 18, 2024

I guess I'm using the async mutex for the other reason, namely that I don't want bb8 to block the application task if the mutex is held by another task. Are you saying that's not a good reason to use an async mutex?

Edit: also, thanks for reviewing the code at this level of detail!

Darksonn avatar Darksonn commented on July 18, 2024

As long as the mutex is only ever locked for very short amounts of time, the blocking it introduces is not a problem. The issue with blocking the thread is that the task doesn't yield back to the executor, which prevents other tasks from running; very short periods of blocking don't cause that problem. This is the same reason it is okay to run small amounts of CPU-bound work inside an asynchronous task. Note that asynchronous mutexes have a synchronous mutex inside.

I have written a bit about this here, and we also talk about it in the mini-redis example. Additionally, our new tutorial has a section on this.

It is also worth mentioning that your destructor can deadlock in some edge cases, as Tokio's mutex hooks into Tokio's coop system.
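
To make the point about short critical sections concrete, here is a minimal sketch, assuming made-up `PoolState` and `Connection` types, of locking a std mutex inside async code without holding the guard across an .await:

use std::sync::{Arc, Mutex};

// Hypothetical types, only to illustrate the shape of the code.
struct Connection;

struct PoolState {
    idle: Vec<Connection>,
}

// A std (blocking) mutex is fine here: the guard is taken for a short,
// purely synchronous critical section and is dropped before any .await.
async fn get_idle_connection(state: Arc<Mutex<PoolState>>) -> Option<Connection> {
    let conn = state.lock().unwrap().idle.pop();
    // any .await would happen here, after the guard has been dropped
    conn
}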

djc avatar djc commented on July 18, 2024

Thanks, that's helpful. I would argue in this case "Additionally, when you do want shared access to an IO resource, it is often better to spawn a task to manage the IO resource, and to use message passing to communicate with that task." is applicable (and I think this would likely make the implementation more clear). What do you think?

Darksonn avatar Darksonn commented on July 18, 2024

That is certainly also a reasonable approach. The main disadvantage is that it requires spawning, tying you closely to the executor, but you are already doing that. In my opinion, both are reasonable approaches here.

If you want to pursue this direction, you could quite reasonably make this part of the schedule_reaping task. Here is one reasonable way to approach it:

  1. Use a bounded mpsc channel to submit requests for connections. The message should include a oneshot channel for responding with the connection.
  2. For returning connections after use, you can use another mpsc channel to send them back to the task. This must be an unbounded channel, since sending on a bounded channel must be awaited and you cannot await in a destructor. I don't think this is an issue, as backpressure is not necessary here.
  3. When no connections are available, you can either open the connection in the task and respond with the new connection, or respond with a message that tells the requester to open a connection themselves. The exact approach depends on whether you want to be opening multiple connections at the same time.

I imagine your task could look something like this:

let mut open_connections = /* some sort of container */;
loop {
    tokio::select! {
        msg = connection_request.recv() => {
            if let Some(oneshot) = msg {
                // ignore the error: it just means the requester went away
                let _ = oneshot.send(open_connections.get_connection());
            } else {
                // the pool has been dropped
                return;
            }
        },
        msg = return_connection.recv() => {
            // you will be keeping a sender alive in `open_connections`,
            // so this can't fail
            open_connections.put_back(msg.unwrap());
        },
        _ = reaping_interval.tick(), if !open_connections.is_empty() => {
            open_connections.reap();
        },
    }
}

Since open_connections is owned only by the task, no mutex is necessary. You can ignore send errors from the destructor, as those just mean the pool has been dropped, and the task has exited.
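
For completeness, a minimal sketch of what the requesting side and the destructor could look like under this design; the `Pool`, `PooledConnection`, and `Connection` names here are made up for illustration:

use tokio::sync::{mpsc, oneshot};

// Hypothetical connection type.
struct Connection;

struct Pool {
    // bounded channel for requesting connections (point 1 above)
    request_tx: mpsc::Sender<oneshot::Sender<Connection>>,
    // unbounded channel for returning connections from the destructor (point 2 above)
    return_tx: mpsc::UnboundedSender<Connection>,
}

struct PooledConnection {
    conn: Option<Connection>,
    return_tx: mpsc::UnboundedSender<Connection>,
}

impl Pool {
    async fn get(&self) -> Option<PooledConnection> {
        let (tx, rx) = oneshot::channel();
        // This fails only if the pool task has exited.
        self.request_tx.send(tx).await.ok()?;
        let conn = rx.await.ok()?;
        Some(PooledConnection {
            conn: Some(conn),
            return_tx: self.return_tx.clone(),
        })
    }
}

impl Drop for PooledConnection {
    fn drop(&mut self) {
        if let Some(conn) = self.conn.take() {
            // Ignoring the error is fine: it just means the pool task is gone.
            let _ = self.return_tx.send(conn);
        }
    }
}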

djc avatar djc commented on July 18, 2024

Thanks! Do you have any intuition for which approach will result in lower latencies?

Darksonn avatar Darksonn commented on July 18, 2024

Not really.
