smol-rs / smol
A small and fast async runtime for Rust
License: Apache License 2.0
Recently I read that zCore has implemented async/await support at the OS level by writing their own executor, proving that an async runtime can run on no_std. Can we also support no_std?
Hi!
I'm currently experimenting with running futures created with smol in other async runtimes.
In this case I'm having a go at writing to an Async<UdpSocket> once per second. I've noticed that writing to the Async<UdpSocket> does seem to work nicely. However, the Timer future does not seem to make progress in the tokio runtime. If I instead create my "once-per-second" stream using futures-timer, everything seems to work as expected.
Is this expected behaviour? I'm still quite new to async Rust, so feel free to tell me if I'm being silly! Otherwise, let me know if you'd like me to assemble a small test case and I'll put something together next time I get the chance.
Edit: I should note that I did try enabling the "tokio02" feature, but it didn't seem to have an effect. I guess this is because the feature is aimed at running tokio futures inside a smol runtime, and not the other way around?
Edit 2: I just tested with async-std instead of tokio, and in this case it seems smol::Timer does work.
I'm really liking this pattern of using the same types in both the sync and async worlds. It reduces sync/async friction to some degree, in my opinion.
let socket = Async::<TcpStream>::connect("127.0.0.1:8080").await?;
Along the same lines, is it possible to have channel Sender and Receiver types that can be easily used in both the async and sync ecosystems?
In the projects I'm working on, some threads are synchronous and some are async. I'm using tokio/futures channels to communicate between them. When using tx or rx in a synchronous thread, I either use the futures executor or wrap the function with tokio::main (even though the code is completely synchronous except for the channel) and do tx.send().await. I'm not sure of the extra perf cost of this when sending data at a very high frequency, and it feels like the async/sync boundaries I'm trying to maintain are shattered.
Is there a way to create a channel that can be easily converted into an async one, like this?
let (tx, rx) = channel::<i32>();
let tx = tx.async();
let rx = rx.async();
Is this something that we need? Thoughts? I guess I'm just looking for someone to say "hey yes let's add this"... :)
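A rough std-only sketch of what that conversion could look like. All type and method names here are hypothetical, not an existing smol API, and the async halves just delegate to the blocking ones to show the intended surface; a real implementation would register wakers instead of blocking.

```rust
use std::sync::mpsc;

// Hypothetical sketch: wrap std's blocking channel halves and expose an
// `into_async()` conversion. The queue itself is shared; only the
// waiting strategy would change (block vs. register a waker).
pub struct Sender<T>(mpsc::Sender<T>);
pub struct Receiver<T>(mpsc::Receiver<T>);
pub struct AsyncSender<T>(mpsc::Sender<T>);
pub struct AsyncReceiver<T>(mpsc::Receiver<T>);

pub fn channel<T>() -> (Sender<T>, Receiver<T>) {
    let (tx, rx) = mpsc::channel();
    (Sender(tx), Receiver(rx))
}

impl<T> Sender<T> {
    pub fn send(&self, t: T) -> Result<(), mpsc::SendError<T>> {
        self.0.send(t)
    }
    pub fn into_async(self) -> AsyncSender<T> {
        AsyncSender(self.0)
    }
}

impl<T> Receiver<T> {
    pub fn recv(&self) -> Result<T, mpsc::RecvError> {
        self.0.recv()
    }
    pub fn into_async(self) -> AsyncReceiver<T> {
        AsyncReceiver(self.0)
    }
}

impl<T> AsyncSender<T> {
    // A real version would return Pending and park a waker when the
    // queue is full; this sketch just delegates to the blocking send.
    pub async fn send(&self, t: T) -> Result<(), mpsc::SendError<T>> {
        self.0.send(t)
    }
}

impl<T> AsyncReceiver<T> {
    // Likewise, a real version would suspend instead of blocking.
    pub async fn recv(&self) -> Result<T, mpsc::RecvError> {
        self.0.recv()
    }
}

fn main() {
    let (tx, rx) = channel::<i32>();
    tx.send(7).unwrap();
    assert_eq!(rx.recv().unwrap(), 7);
    // ...and the conversion the snippet above asks for:
    let _tx = tx.into_async();
    let _rx = rx.into_async();
    println!("sync send/recv worked; handles converted to async");
}
```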
Why is it okay to drop any error returned from Socket::connect? It could be any of the errors listed at http://man7.org/linux/man-pages/man2/connect.2.html#ERRORS, and a user might be interested in handling them properly.
According to the documentation here, we should be able to cancel a task that is running in a forever loop with detach(). That can never work, since detach() returns ().
I'm considering renaming the examples directory to something else, perhaps demos. I find two things about the current setup inconvenient:
It's hard to add new examples, because we need to add an [[example]] section per example to examples/Cargo.toml. Sometimes it's nice to add a quick temporary example for testing.
You need to be in the examples directory to run examples.
A possible solution would be to:
Rename examples to demo.
Set autoexamples = false.
Add demo to the default-members list in the [workspace] section.
In that case, we could do cargo run --bin ctrl-c from the root of the repository.
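Concretely, the manifest changes might look something like this (a sketch only; the exact layout is illustrative and assumes the examples move to demo/src/bin/):

```toml
# Root Cargo.toml (sketch)
[workspace]
members = [".", "demo"]
default-members = [".", "demo"]

# demo/Cargo.toml (sketch)
[package]
name = "demo"
version = "0.0.0"
edition = "2018"
autoexamples = false
# Every file in demo/src/bin/ is auto-discovered as a binary,
# so `cargo run --bin ctrl-c` would work from the repository root
# without a per-example [[example]] section.
```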
Would it make sense to replace queue: VecDeque<Runnable> in the blocking executor with a multi-consumer channel/queue? In the current implementation, there can be a good amount of contention on the state mutex when spawning a lot of blocking tasks.
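For context, the current shape can be mimicked with a std-only sketch: a single Mutex<VecDeque> that every worker locks for each pop, which is where the contention shows up. Names like BlockingPool and run_blocking are hypothetical, not smol's internals.

```rust
use std::collections::VecDeque;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// A blocking task, analogous in spirit to a Runnable.
type Runnable = Box<dyn FnOnce() + Send>;

// One queue behind one mutex: every push and every pop from any
// worker contends on this single lock.
struct BlockingPool {
    queue: Mutex<VecDeque<Option<Runnable>>>, // None = shutdown sentinel
    cvar: Condvar,
}

impl BlockingPool {
    fn schedule(&self, task: Option<Runnable>) {
        self.queue.lock().unwrap().push_back(task);
        self.cvar.notify_one();
    }
}

fn run_blocking(n_tasks: usize, n_workers: usize) -> usize {
    let pool = Arc::new(BlockingPool {
        queue: Mutex::new(VecDeque::new()),
        cvar: Condvar::new(),
    });
    let done = Arc::new(AtomicUsize::new(0));

    let workers: Vec<_> = (0..n_workers)
        .map(|_| {
            let pool = Arc::clone(&pool);
            thread::spawn(move || loop {
                // Each pop takes the shared lock.
                let task = {
                    let mut q = pool.queue.lock().unwrap();
                    while q.is_empty() {
                        q = pool.cvar.wait(q).unwrap();
                    }
                    q.pop_front().unwrap()
                };
                match task {
                    Some(run) => run(),
                    None => break, // sentinel: shut this worker down
                }
            })
        })
        .collect();

    for _ in 0..n_tasks {
        let done = Arc::clone(&done);
        pool.schedule(Some(Box::new(move || {
            done.fetch_add(1, Ordering::SeqCst);
        })));
    }
    // One sentinel per worker; FIFO order means all real tasks run first.
    for _ in 0..n_workers {
        pool.schedule(None);
    }
    for w in workers {
        w.join().unwrap();
    }
    done.load(Ordering::SeqCst)
}

fn main() {
    println!("completed {} blocking tasks", run_blocking(1000, 4));
}
```

A multi-consumer channel would replace the explicit lock-and-wait dance with its own internal synchronization, which is presumably the motivation for the question.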
So I've got a smol thread pool where I've started using a piper::Event-derived construct to block the run() function and make sure the thread pool stays open until the shutdown flag is set to true.
I also have some background tasks that do things like waiting on channels for stuff they need, etc.
My question is: what happens to tasks, such as those background tasks sitting around waiting on channels, when the run() function exits because a shutdown has been triggered and the Event unblocks?
Do all of the futures that are in progress get canceled?
Smol is not correctly simulating edge triggering on Windows. Although it uses EPOLLONESHOT, it always reregisters, so wait always returns immediately.
I think this is the underlying issue of async-rs/async-std#773.
First of all, thanks for having a code of conduct at all!
The code of conduct still contains the placeholder value "[INSERT CONTACT METHOD]" when explaining how to contact the "community leaders responsible for enforcement".
To be clear, I don't need to use that right now, as I'm only just reading smol for the first time and haven't even had any community interaction yet. But it seems like something worth fixing before somebody does want to bring up an issue.
Since Async::new() should really only be used for networking and not for stdin/stderr/stdout/files, what if we renamed Async::new() to Async::socket() to prevent people from putting files etc. inside Async and expecting things to work?
That said, there are some things that aren't sockets but can be used with Async, like timerfd and inotify; we can say those are socket-like things.
On Windows, Async::new() only accepts T: AsRawSocket, so you really can only use sockets with it - the situation there is perfect. On Unix, there's just AsRawFd, and there's no real way to distinguish between sockets and other kinds of file descriptors.
I was playing around with the async-h1 server example and found that if the client process dies mid-connection, it kills the executor with the error below. This is from running single-threaded, where the process exits after this. If it's run multi-threaded, like in the example, that executor thread dies.
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Connection reset by peer (os error 104)', /home/zethra/.cargo/registry/src/github.com-1ecc6299db9ec823/smol-0.1.4/src/task.rs:162:29
stack backtrace:
0: backtrace::backtrace::libunwind::trace
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.44/src/backtrace/libunwind.rs:86
1: backtrace::backtrace::trace_unsynchronized
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.44/src/backtrace/mod.rs:66
2: std::sys_common::backtrace::_print_fmt
at src/libstd/sys_common/backtrace.rs:78
3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
at src/libstd/sys_common/backtrace.rs:59
4: core::fmt::write
at src/libcore/fmt/mod.rs:1063
5: std::io::Write::write_fmt
at src/libstd/io/mod.rs:1426
6: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:62
7: std::sys_common::backtrace::print
at src/libstd/sys_common/backtrace.rs:49
8: std::panicking::default_hook::{{closure}}
at src/libstd/panicking.rs:204
9: std::panicking::default_hook
at src/libstd/panicking.rs:224
10: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:470
11: rust_begin_unwind
at src/libstd/panicking.rs:378
12: core::panicking::panic_fmt
at src/libcore/panicking.rs:85
13: core::option::expect_none_failed
at src/libcore/option.rs:1211
14: core::result::Result<T,E>::unwrap
at /rustc/4fb7144ed159f94491249e86d5bbd033b5d60550/src/libcore/result.rs:1003
15: smol::task::Task<core::result::Result<T,E>>::unwrap::{{closure}}
at /home/zethra/.cargo/registry/src/github.com-1ecc6299db9ec823/smol-0.1.4/src/task.rs:162
16: <std::future::GenFuture<T> as core::future::future::Future>::poll
at /rustc/4fb7144ed159f94491249e86d5bbd033b5d60550/src/libstd/future.rs:44
17: async_task::raw::RawTask<F,R,S,T>::run
at /home/zethra/.cargo/registry/src/github.com-1ecc6299db9ec823/async-task-3.0.0/src/raw.rs:502
18: async_task::task::Task<T>::run
at /home/zethra/.cargo/registry/src/github.com-1ecc6299db9ec823/async-task-3.0.0/src/task.rs:265
19: smol::work_stealing::Worker::execute::{{closure}}
at /home/zethra/.cargo/registry/src/github.com-1ecc6299db9ec823/smol-0.1.4/src/work_stealing.rs:176
20: scoped_tls_hkt::ScopedKey<T>::set
at /home/zethra/.cargo/registry/src/github.com-1ecc6299db9ec823/scoped-tls-hkt-0.1.2/src/lib.rs:488
21: smol::throttle::setup
at /home/zethra/.cargo/registry/src/github.com-1ecc6299db9ec823/smol-0.1.4/src/throttle.rs:31
22: smol::work_stealing::Worker::execute
at /home/zethra/.cargo/registry/src/github.com-1ecc6299db9ec823/smol-0.1.4/src/work_stealing.rs:176
23: smol::run::run::{{closure}}
at /home/zethra/.cargo/registry/src/github.com-1ecc6299db9ec823/smol-0.1.4/src/run.rs:150
24: core::ops::function::FnOnce::call_once
at /rustc/4fb7144ed159f94491249e86d5bbd033b5d60550/src/libcore/ops/function.rs:232
25: smol::context::enter
at /home/zethra/.cargo/registry/src/github.com-1ecc6299db9ec823/smol-0.1.4/src/context.rs:8
26: smol::run::run::{{closure}}::{{closure}}
at /home/zethra/.cargo/registry/src/github.com-1ecc6299db9ec823/smol-0.1.4/src/run.rs:111
27: scoped_tls_hkt::ScopedKey<T>::set
at /home/zethra/.cargo/registry/src/github.com-1ecc6299db9ec823/scoped-tls-hkt-0.1.2/src/lib.rs:488
28: smol::thread_local::ThreadLocalExecutor::enter
at /home/zethra/.cargo/registry/src/github.com-1ecc6299db9ec823/smol-0.1.4/src/thread_local.rs:57
29: smol::run::run::{{closure}}
at /home/zethra/.cargo/registry/src/github.com-1ecc6299db9ec823/smol-0.1.4/src/run.rs:111
30: smol::run::run::{{closure}}::{{closure}}
at /home/zethra/.cargo/registry/src/github.com-1ecc6299db9ec823/smol-0.1.4/src/run.rs:112
31: smol::work_stealing::WORKER::<impl smol::work_stealing::WORKER>::set
at ./<::scoped_tls_hkt::scoped_thread_local macros>:40
32: smol::work_stealing::Worker::enter
at /home/zethra/.cargo/registry/src/github.com-1ecc6299db9ec823/smol-0.1.4/src/work_stealing.rs:149
33: smol::run::run::{{closure}}
at /home/zethra/.cargo/registry/src/github.com-1ecc6299db9ec823/smol-0.1.4/src/run.rs:112
34: smol::run::run
at /home/zethra/.cargo/registry/src/github.com-1ecc6299db9ec823/smol-0.1.4/src/run.rs:114
35: servy::main
at src/main.rs:74
36: std::rt::lang_start::{{closure}}
at /rustc/4fb7144ed159f94491249e86d5bbd033b5d60550/src/libstd/rt.rs:67
37: std::rt::lang_start_internal::{{closure}}
at src/libstd/rt.rs:52
38: std::panicking::try::do_call
at src/libstd/panicking.rs:303
39: __rust_maybe_catch_panic
at src/libpanic_unwind/lib.rs:86
40: std::panicking::try
at src/libstd/panicking.rs:281
41: std::panic::catch_unwind
at src/libstd/panic.rs:394
42: std::rt::lang_start_internal
at src/libstd/rt.rs:51
43: std::rt::lang_start
at /rustc/4fb7144ed159f94491249e86d5bbd033b5d60550/src/libstd/rt.rs:67
44: main
45: __libc_start_main
46: _start
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Running on Linux x86_64.
Edit: I copied that example into a project named servy; that's why that name is in there.
Ran cargo run --example chat-server and the following error occurred:
error: failed to run custom build command for `wepoll-sys v1.0.4`
Caused by:
process didn't exit successfully: `C:\Users\yoshu\Code\smol\target\debug\build\wepoll-sys-117e2f9800108a96\build-script-build` (exit code: 101)
--- stdout
TARGET = Some("x86_64-pc-windows-msvc")
OPT_LEVEL = Some("0")
HOST = Some("x86_64-pc-windows-msvc")
CC_x86_64-pc-windows-msvc = None
CC_x86_64_pc_windows_msvc = None
HOST_CC = None
CC = None
CFLAGS_x86_64-pc-windows-msvc = None
CFLAGS_x86_64_pc_windows_msvc = None
HOST_CFLAGS = None
CFLAGS = None
CRATE_CC_NO_DEFAULTS = None
CARGO_CFG_TARGET_FEATURE = Some("fxsr,sse,sse2")
DEBUG = Some("true")
running: "C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.24.28314\\bin\\HostX64\\x64\\cl.exe" "-nologo" "-MD" "-Z7" "-Brepro" "-I" "C:\\Users\\yoshu\\Code\\smol\\target\\debug\\build\\wepoll-sys-1dae2e165fd4cfb0\\out\\wepoll-build" "-W4" "-FoC:\\Users\\yoshu\\Code\\smol\\target\\debug\\build\\wepoll-sys-1dae2e165fd4cfb0\\out\\wepoll-build\\wepoll.o" "-c" "C:\\Users\\yoshu\\Code\\smol\\target\\debug\\build\\wepoll-sys-1dae2e165fd4cfb0\\out\\wepoll-build\\wepoll.c"
wepoll.c
exit code: 0
AR_x86_64-pc-windows-msvc = None
AR_x86_64_pc_windows_msvc = None
HOST_AR = None
AR = None
running: "C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Tools\\MSVC\\14.24.28314\\bin\\HostX64\\x64\\lib.exe" "-out:C:\\Users\\yoshu\\Code\\smol\\target\\debug\\build\\wepoll-sys-1dae2e165fd4cfb0\\out\\wepoll-build\\libwepoll.a" "-nologo" "C:\\Users\\yoshu\\Code\\smol\\target\\debug\\build\\wepoll-sys-1dae2e165fd4cfb0\\out\\wepoll-build\\wepoll.o"
exit code: 0
cargo:rustc-link-lib=static=wepoll
cargo:rustc-link-search=native=C:\Users\yoshu\Code\smol\target\debug\build\wepoll-sys-1dae2e165fd4cfb0\out\wepoll-build
cargo:rustc-link-lib=static=wepoll
cargo:rustc-link-search=C:\Users\yoshu\Code\smol\target\debug\build\wepoll-sys-1dae2e165fd4cfb0\out\wepoll-build
cargo:warning=couldn't execute `llvm-config --prefix` (error: The system cannot find the file specified. (os error 2))
cargo:warning=set the LLVM_CONFIG_PATH environment variable to a valid `llvm-config` executable
--- stderr
thread 'main' panicked at 'Unable to find libclang: "couldn\'t find any valid shared libraries matching: [\'clang.dll\', \'libclang.dll\'], set the `LIBCLANG_PATH` environment variable to a path where one of these files can be found (invalid: [])"', C:\Users\yoshu\scoop\persist\rustup\.cargo\registry\src\github.com-1ecc6299db9ec823\bindgen-0.52.0\src/lib.rs:1895:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
error: build failed
PS C:\Users\yoshu\Code\smol>
I don't have the libc-based toolchain installed locally because it's a pain to set up; I haven't yet confirmed whether it also fails with that installed.
Do you think it's necessary at this stage?
The functions reader() and writer() return impl Trait types, which is not great if we want to store a value of that type somewhere. Perhaps it'd be better to return Reader and Writer types defined in this crate.
Furthermore, it would be nice to have some sugar around common patterns. Writing into a file currently looks like this:
let file = blocking!(File::open("foo.txt"))?;
let mut file = writer(file);
But what if we could do this instead?
let mut file = Writer::file("foo.txt").await?;
If we extend this idea a little bit, what if we had the following API?
struct Reader;
struct Writer;
impl Reader {
fn blocking(r: impl Read + Send + 'static) -> Reader;
fn stdin() -> Reader;
async fn file(p: impl AsRef<Path>) -> io::Result<Reader>;
async fn open(opts: OpenOptions) -> io::Result<Reader>;
}
impl Writer {
fn blocking(w: impl Write + Send + 'static) -> Writer;
fn stdout() -> Writer;
fn stderr() -> Writer;
async fn file(path: impl AsRef<Path>) -> io::Result<Writer>;
async fn open(opts: OpenOptions) -> io::Result<Writer>;
}
If we rename Async::new() to Async::pollable(), then it'd also make sense to use Reader::blocking() rather than Reader::new() to indicate what types the reader (or writer) should be used with.
Previous discussion: #36
I think this issue might be appropriate, as smol is also intended as a project to help people learn about async executors.
Overall, I found the code very easy to follow (thanks @stjepang !). The one place I was completely confused was how/where awake and asleep tasks are differentiated.
I finally semi-randomly read the docs for async_task::Task and realized that after running a task, its Task ref will be gone if it returns Poll::Pending, but the schedule function will take a reference to the awoken task and reschedule it somehow: https://docs.rs/async-task/3.0.0/async_task/struct.Task.html . As soon as I read that, I knew what was going on, but since I wasn't familiar with it beforehand, it took a while to find this information.
I think I was confused because I expected the scheduling to be handled from the executor side instead of the task side, if that makes any sense (i.e. I was trying to figure out where in the executor the queue of asleep tasks was kept).
Anyways, I now understand that the schedule closure at https://github.com/stjepang/smol/blob/master/src/thread_local.rs#L81 is meant for rescheduling woken-up tasks, not just for the initial spawn. I guess "schedules a runnable task" was not enough information for me there, since I'd been thinking in terms of waking.
Perhaps someone else would not have gotten stuck here, but I just wanted to share my experience in case it helps somebody else navigate the code.
Let me know if this feedback is helpful, and if it would be helpful for me to try to clarify in the code comments.
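For anyone else navigating this, the task-side scheduling pattern can be sketched with only std. This is a toy single-threaded executor, not smol's actual implementation: the point is that waking a task simply re-sends it onto the run queue, so the executor never needs to track asleep tasks at all.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::mpsc::{sync_channel, SyncSender};
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// A task bundles its future with the queue it reschedules itself onto.
struct Task {
    future: Mutex<Option<Pin<Box<dyn Future<Output = ()> + Send>>>>,
    queue: SyncSender<Arc<Task>>,
}

impl Wake for Task {
    // Waking = pushing the task back onto the run queue. This plays the
    // role of the "schedule" closure: it lives on the task side, so the
    // executor has no list of asleep tasks anywhere.
    fn wake(self: Arc<Self>) {
        let queue = self.queue.clone();
        let _ = queue.send(self);
    }
}

// A toy single-threaded executor: it only ever sees runnable tasks.
fn block_on_all(future: impl Future<Output = ()> + Send + 'static) {
    let (tx, rx) = sync_channel::<Arc<Task>>(64);
    let task = Arc::new(Task {
        future: Mutex::new(Some(Box::pin(future))),
        queue: tx.clone(),
    });
    tx.send(task).unwrap();
    drop(tx); // the queue lives only while some task holds a sender
    while let Ok(task) = rx.recv() {
        let mut slot = task.future.lock().unwrap();
        if let Some(mut fut) = slot.take() {
            let waker = Waker::from(task.clone());
            let mut cx = Context::from_waker(&waker);
            if fut.as_mut().poll(&mut cx).is_pending() {
                // Pending: the future is parked right here until its
                // waker re-sends the task; the executor forgets it.
                *slot = Some(fut);
            }
        }
    }
}

// A future that returns Pending once and wakes itself, exercising the
// reschedule path described above.
struct YieldNow(bool);
impl Future for YieldNow {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.0 {
            Poll::Ready(())
        } else {
            self.0 = true;
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

fn main() {
    use std::sync::atomic::{AtomicBool, Ordering};
    let done = Arc::new(AtomicBool::new(false));
    let flag = done.clone();
    block_on_all(async move {
        YieldNow(false).await; // go "asleep" once, get rescheduled
        flag.store(true, Ordering::SeqCst);
    });
    assert!(done.load(Ordering::SeqCst));
    println!("task completed after one reschedule");
}
```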
I can't seem to figure out a way to send a message on a channel without boxing. Is there something I'm missing?
Currently Async supports wrapping things that can be converted into pollable resources. In async-native-tls there is an adapter transforming a resource that only implements std::io::{Read, Write} into AsyncRead/AsyncWrite.
I wonder if this could be improved, cleaned up, and possibly put in here, allowing resources that are not pollable to be wrapped easily.
Ref: https://github.com/async-email/async-native-tls/blob/master/src/std_adapter.rs
So I tested a simple program with valgrind.
use anyhow::Result;
fn main() -> Result<()> {
smol::run(async {
Ok(())
})
}
While I won't claim deep knowledge of valgrind, here is some of its output:
==60440== Memcheck, a memory error detector
==60440== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==60440== Using Valgrind-3.15.0-608cb11914-20190413X and LibVEX; rerun with -h for copyright info
==60440== Command: ./target/debug/smol-playground
==60440== Parent PID: 52069
==60440==
--60440--
--60440-- Valgrind options:
--60440-- --log-file=valgrind-output.txt
--60440-- --track-origins=yes
--60440-- --track-fds=yes
--60440-- --leak-check=full
--60440-- --show-reachable=yes
--60440-- --trace-children=yes
--60440-- --read-var-info=yes
--60440-- -v
--60440-- Contents of /proc/version:
--60440-- Linux version 5.6.4-arch1-1 (linux@archlinux) (gcc version 9.3.0 (Arch Linux 9.3.0-1)) #1 SMP PREEMPT Mon, 13 Apr 2020 12:21:19 +0000
--60440--
--60440-- Arch and hwcaps: AMD64, LittleEndian, amd64-cx16-lzcnt-rdtscp-sse3-ssse3-avx-avx2-bmi-f16c-rdrand
--60440-- Page sizes: currently 4096, max supported 4096
--60440-- Valgrind library directory: /usr/lib/valgrind
--60440-- Reading syms from /home/dbuch/dev/rust/smol-playground/target/debug/smol-playground
--60440-- warning: DiCfSI 0x108000 .. 0x108164 outside mapped rx segments (NONE)
--60440-- warning: DiCfSI 0x108000 .. 0x108299 outside mapped rx segments (NONE)
--60440-- warning: DiCfSI 0x108000 .. 0x10813d outside mapped rx segments (NONE)
--60440-- warning: DiCfSI 0x108000 .. 0x10802d outside mapped rx segments (NONE)
--60440-- warning: DiCfSI 0x108000 .. 0x10821f outside mapped rx segments (NONE)
--60440-- warning: DiCfSI 0x108000 .. 0x108001 outside mapped rx segments (NONE)
--60440-- warning: DiCfSI 0x108002 .. 0x108003 outside mapped rx segments (NONE)
--60440-- warning: DiCfSI 0x108004 .. 0x108005 outside mapped rx segments (NONE)
--60440-- warning: DiCfSI 0x108006 .. 0x108006 outside mapped rx segments (NONE)
--60440-- warning: DiCfSI 0x108007 .. 0x108265 outside mapped rx segments (NONE)
parse DIE(readdwarf3.c:3123): confused by:
<0><b>: Abbrev Number: 1 (DW_TAG_compile_unit)
DW_AT_producer : (indirect string, offset: 0x0): clang LLVM (rustc version 1.42.0 (b8cedc004 2020-03-09))
DW_AT_language : 28
DW_AT_name : (indirect string, offset: 0x39): src/main.rs
DW_AT_stmt_list : 0
DW_AT_comp_dir : (indirect string, offset: 0x45): /home/dbuch/dev/rust/smol-playground
DW_AT_??? : 1
DW_AT_low_pc : 0x0
DW_AT_ranges : 48
parse_type_DIE:
--60440-- WARNING: Serious error when reading debug info
--60440-- When reading debug info from /home/dbuch/dev/rust/smol-playground/target/debug/smol-playground:
--60440-- confused by the above DIE
--60440-- Reading syms from /usr/lib/ld-2.31.so
--60440-- Reading syms from /usr/lib/valgrind/memcheck-amd64-linux
--60440-- object doesn't have a dynamic symbol table
--60440-- warning: addVar: unknown size (s)
--60440-- warning: addVar: unknown size (s)
--60440-- warning: addVar: unknown size (s)
--60440-- warning: addVar: unknown size (s)
--60440-- warning: addVar: unknown size (s)
--60440-- warning: addVar: unknown size (s)
--60440-- warning: addVar: unknown size (s)
--60440-- warning: addVar: unknown size (s)
--60440-- warning: addVar: unknown size (s)
--60440-- warning: addVar: unknown size (s)
--60440-- Scheduler: using generic scheduler lock implementation.
--60440-- Reading suppressions file: /usr/lib/valgrind/default.supp
==60440== embedded gdbserver: reading from /tmp/vgdb-pipe-from-vgdb-to-60440-by-dbuch-on-???
==60440== embedded gdbserver: writing to /tmp/vgdb-pipe-to-vgdb-from-60440-by-dbuch-on-???
==60440== embedded gdbserver: shared mem /tmp/vgdb-pipe-shared-mem-vgdb-60440-by-dbuch-on-???
==60440==
==60440== TO CONTROL THIS PROCESS USING vgdb (which you probably
==60440== don't want to do, unless you know exactly what you're doing,
==60440== or are doing some strange experiment):
==60440== /usr/lib/valgrind/../../bin/vgdb --pid=60440 ...command...
==60440==
==60440== TO DEBUG THIS PROCESS USING GDB: start GDB like this
==60440== /path/to/gdb ./target/debug/smol-playground
==60440== and then give GDB the following command
==60440== target remote | /usr/lib/valgrind/../../bin/vgdb --pid=60440
==60440== --pid is optional if only one valgrind process is running
==60440==
--60440-- REDIR: 0x40212b0 (ld-linux-x86-64.so.2:strlen) redirected to 0x580c7532 (vgPlain_amd64_linux_REDIR_FOR_strlen)
--60440-- REDIR: 0x4021080 (ld-linux-x86-64.so.2:index) redirected to 0x580c754c (vgPlain_amd64_linux_REDIR_FOR_index)
--60440-- Reading syms from /usr/lib/valgrind/vgpreload_core-amd64-linux.so
--60440-- Reading syms from /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so
==60440== WARNING: new redirection conflicts with existing -- ignoring it
--60440-- old: 0x040212b0 (strlen ) R-> (0000.0) 0x580c7532 vgPlain_amd64_linux_REDIR_FOR_strlen
--60440-- new: 0x040212b0 (strlen ) R-> (2007.0) 0x0483cda0 strlen
--60440-- REDIR: 0x401da90 (ld-linux-x86-64.so.2:strcmp) redirected to 0x483dc90 (strcmp)
--60440-- REDIR: 0x4021810 (ld-linux-x86-64.so.2:mempcpy) redirected to 0x4841670 (mempcpy)
--60440-- Reading syms from /usr/lib/libdl-2.31.so
--60440-- object doesn't have a symbol table
--60440-- Reading syms from /usr/lib/librt-2.31.so
--60440-- object doesn't have a symbol table
--60440-- Reading syms from /usr/lib/libpthread-2.31.so
--60440-- Reading syms from /usr/lib/libgcc_s.so.1
--60440-- Reading syms from /usr/lib/libc-2.31.so
--60440-- REDIR: 0x495cc40 (libc.so.6:memmove) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495bfc0 (libc.so.6:strncpy) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495cf70 (libc.so.6:strcasecmp) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495b8e0 (libc.so.6:strcat) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495c020 (libc.so.6:rindex) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495e310 (libc.so.6:rawmemchr) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x49767f0 (libc.so.6:wmemchr) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x4976330 (libc.so.6:wcscmp) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495cda0 (libc.so.6:mempcpy) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495cbd0 (libc.so.6:bcmp) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495bf50 (libc.so.6:strncmp) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495b990 (libc.so.6:strcmp) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495cd00 (libc.so.6:memset) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x49762f0 (libc.so.6:wcschr) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495beb0 (libc.so.6:strnlen) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495ba70 (libc.so.6:strcspn) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495cfc0 (libc.so.6:strncasecmp) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495ba10 (libc.so.6:strcpy) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495d110 (libc.so.6:memcpy@@GLIBC_2.14) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x4977a40 (libc.so.6:wcsnlen) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x4976370 (libc.so.6:wcscpy) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495c060 (libc.so.6:strpbrk) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495b940 (libc.so.6:index) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495be70 (libc.so.6:strlen) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x4962750 (libc.so.6:memrchr) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495d010 (libc.so.6:strcasecmp_l) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495cb90 (libc.so.6:memchr) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x4976440 (libc.so.6:wcslen) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495c320 (libc.so.6:strspn) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495cf10 (libc.so.6:stpncpy) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495ceb0 (libc.so.6:stpcpy) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495e350 (libc.so.6:strchrnul) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x495d060 (libc.so.6:strncasecmp_l) redirected to 0x482f1c0 (_vgnU_ifunc_wrapper)
--60440-- REDIR: 0x4a2e000 (libc.so.6:__strrchr_avx2) redirected to 0x483c7b0 (rindex)
--60440-- REDIR: 0x4957e00 (libc.so.6:malloc) redirected to 0x4839710 (malloc)
--60440-- REDIR: 0x495c7d0 (libc.so.6:__GI_strstr) redirected to 0x48418c0 (__strstr_sse2)
--60440-- REDIR: 0x4a2a030 (libc.so.6:__memchr_avx2) redirected to 0x483dd10 (memchr)
--60440-- REDIR: 0x49586b0 (libc.so.6:realloc) redirected to 0x483bd00 (realloc)
--60440-- REDIR: 0x4a311e0 (libc.so.6:__memcpy_avx_unaligned_erms) redirected to 0x4840690 (memmove)
--60440-- REDIR: 0x4a2a300 (libc.so.6:__rawmemchr_avx2) redirected to 0x4841210 (rawmemchr)
--60440-- REDIR: 0x4958440 (libc.so.6:free) redirected to 0x483a940 (free)
--60440-- REDIR: 0x4a31660 (libc.so.6:__memset_avx2_unaligned_erms) redirected to 0x4840580 (memset)
--60440-- REDIR: 0x4958b80 (libc.so.6:calloc) redirected to 0x483bab0 (calloc)
--60440-- REDIR: 0x49598c0 (libc.so.6:posix_memalign) redirected to 0x483bfa0 (posix_memalign)
==60440==
==60440== FILE DESCRIPTORS: 7 open at exit.
==60440== Open AF_UNIX socket 8: <unknown>
==60440== at 0x49CDA7E: socketpair (in /usr/lib/libc-2.31.so)
==60440== by 0x17396B: socket2::sys::Socket::pair (unix.rs:202)
==60440== by 0x172F08: socket2::socket::Socket::pair (socket.rs:82)
==60440== by 0x12B13C: smol::pipe (lib.rs:1873)
==60440== by 0x12A740: smol::SelfPipe::new (lib.rs:1818)
==60440== by 0x12A420: smol::IoEvent::new (lib.rs:1786)
==60440== by 0x13548D: smol::WorkStealingExecutor::get::EXECUTOR::{{closure}} (lib.rs:801)
==60440== by 0x157734: core::ops::function::FnOnce::call_once (function.rs:232)
==60440== by 0x157803: core::ops::function::FnOnce::call_once (function.rs:232)
==60440== by 0x137235: once_cell::sync::Lazy<T,F>::force::{{closure}} (lib.rs:910)
==60440== by 0x137466: once_cell::sync::OnceCell<T>::get_or_init::{{closure}} (lib.rs:763)
==60440== by 0x15F84C: once_cell::imp::OnceCell<T>::initialize::{{closure}} (imp_std.rs:96)
==60440==
==60440== Open AF_UNIX socket 7: <unknown>
==60440== at 0x49CDA7E: socketpair (in /usr/lib/libc-2.31.so)
==60440== by 0x17396B: socket2::sys::Socket::pair (unix.rs:202)
==60440== by 0x172F08: socket2::socket::Socket::pair (socket.rs:82)
==60440== by 0x12B13C: smol::pipe (lib.rs:1873)
==60440== by 0x12A740: smol::SelfPipe::new (lib.rs:1818)
==60440== by 0x12A420: smol::IoEvent::new (lib.rs:1786)
==60440== by 0x13548D: smol::WorkStealingExecutor::get::EXECUTOR::{{closure}} (lib.rs:801)
==60440== by 0x157734: core::ops::function::FnOnce::call_once (function.rs:232)
==60440== by 0x157803: core::ops::function::FnOnce::call_once (function.rs:232)
==60440== by 0x137235: once_cell::sync::Lazy<T,F>::force::{{closure}} (lib.rs:910)
==60440== by 0x137466: once_cell::sync::OnceCell<T>::get_or_init::{{closure}} (lib.rs:763)
==60440== by 0x15F84C: once_cell::imp::OnceCell<T>::initialize::{{closure}} (imp_std.rs:96)
==60440==
==60440== Open file descriptor 6:
==60440== at 0x49CCD6B: epoll_create1 (in /usr/lib/libc-2.31.so)
==60440== by 0x13C76F: nix::sys::epoll::epoll_create1 (epoll.rs:78)
==60440== by 0x137D5F: smol::sys::Reactor::new (lib.rs:2289)
==60440== by 0x135E35: smol::Reactor::get::REACTOR::{{closure}} (lib.rs:1088)
==60440== by 0x157604: core::ops::function::FnOnce::call_once (function.rs:232)
==60440== by 0x1576E3: core::ops::function::FnOnce::call_once (function.rs:232)
==60440== by 0x137005: once_cell::sync::Lazy<T,F>::force::{{closure}} (lib.rs:910)
==60440== by 0x1373FB: once_cell::sync::OnceCell<T>::get_or_init::{{closure}} (lib.rs:763)
==60440== by 0x15F561: once_cell::imp::OnceCell<T>::initialize::{{closure}} (imp_std.rs:96)
==60440== by 0x1759AA: once_cell::imp::initialize_inner (imp_std.rs:133)
==60440== by 0x15F4BD: once_cell::imp::OnceCell<T>::initialize (imp_std.rs:94)
==60440== by 0x137743: once_cell::sync::OnceCell<T>::get_or_try_init (lib.rs:803)
==60440==
==60440== Open file descriptor 3: /home/dbuch/dev/rust/smol-playground/valgrind-output.txt
==60440== <inherited from parent>
==60440==
==60440== Open file descriptor 2: /dev/pts/4
==60440== <inherited from parent>
==60440==
==60440== Open file descriptor 1: /dev/pts/4
==60440== <inherited from parent>
==60440==
==60440== Open file descriptor 0: /dev/pts/4
==60440== <inherited from parent>
==60440==
==60440==
==60440== HEAP SUMMARY:
==60440== in use at exit: 14,908 bytes in 18 blocks
==60440== total heap usage: 39 allocs, 21 frees, 18,529 bytes allocated
==60440==
==60440== Searching for pointers to 18 not-freed blocks
==60440== Checked 128,784 bytes
==60440==
==60440== 4 bytes in 1 blocks are still reachable in loss record 1 of 11
==60440== at 0x483977F: malloc (vg_replace_malloc.c:309)
==60440== by 0x13CB3B: alloc::alloc::alloc (alloc.rs:81)
==60440== by 0x13CAAB: alloc::alloc::exchange_malloc (alloc.rs:203)
==60440== by 0x136529: smol::Async<T>::new (boxed.rs:174)
==60440== by 0x12AAFB: smol::SelfPipe::new (lib.rs:1824)
==60440== by 0x12A420: smol::IoEvent::new (lib.rs:1786)
==60440== by 0x13548D: smol::WorkStealingExecutor::get::EXECUTOR::{{closure}} (lib.rs:801)
==60440== by 0x157734: core::ops::function::FnOnce::call_once (function.rs:232)
==60440== by 0x157803: core::ops::function::FnOnce::call_once (function.rs:232)
==60440== by 0x137235: once_cell::sync::Lazy<T,F>::force::{{closure}} (lib.rs:910)
==60440== by 0x137466: once_cell::sync::OnceCell<T>::get_or_init::{{closure}} (lib.rs:763)
==60440== by 0x15F84C: once_cell::imp::OnceCell<T>::initialize::{{closure}} (imp_std.rs:96)
==60440==
==60440== 24 bytes in 1 blocks are still reachable in loss record 2 of 11
==60440== at 0x483977F: malloc (vg_replace_malloc.c:309)
==60440== by 0x13CB3B: alloc::alloc::alloc (alloc.rs:81)
==60440== by 0x13C911: <alloc::alloc::Global as core::alloc::Alloc>::alloc (alloc.rs:169)
==60440== by 0x152706: alloc::raw_vec::RawVec<T,A>::reserve_internal (raw_vec.rs:661)
==60440== by 0x153E19: alloc::raw_vec::RawVec<T,A>::reserve (raw_vec.rs:485)
==60440== by 0x155839: alloc::vec::Vec<T>::reserve (vec.rs:493)
==60440== by 0x15510C: alloc::vec::Vec<T>::push (vec.rs:1166)
==60440== by 0x12CB09: slab::Slab<T>::insert_at (lib.rs:632)
==60440== by 0x12CF17: slab::VacantEntry<T>::insert (lib.rs:891)
==60440== by 0x12809A: smol::WorkStealingExecutor::worker (lib.rs:846)
==60440== by 0x120F94: smol::run (lib.rs:503)
==60440== by 0x1252C9: smol_playground::main (main.rs:4)
==60440==
==60440== 32 bytes in 1 blocks are still reachable in loss record 3 of 11
==60440== at 0x483BD7B: realloc (vg_replace_malloc.c:836)
==60440== by 0x13CC0C: alloc::alloc::realloc (alloc.rs:125)
==60440== by 0x13C9D5: <alloc::alloc::Global as core::alloc::Alloc>::realloc (alloc.rs:184)
==60440== by 0x1520C2: alloc::raw_vec::RawVec<T,A>::reserve_internal (raw_vec.rs:659)
==60440== by 0x153C09: alloc::raw_vec::RawVec<T,A>::reserve (raw_vec.rs:485)
==60440== by 0x155869: alloc::vec::Vec<T>::reserve (vec.rs:493)
==60440== by 0x15526C: alloc::vec::Vec<T>::push (vec.rs:1166)
==60440== by 0x12CD5A: slab::Slab<T>::insert_at (lib.rs:632)
==60440== by 0x12CFA7: slab::VacantEntry<T>::insert (lib.rs:891)
==60440== by 0x128EF2: smol::Reactor::register (lib.rs:1115)
==60440== by 0x136417: smol::Async<T>::new (lib.rs:1331)
==60440== by 0x12AAFB: smol::SelfPipe::new (lib.rs:1824)
==60440==
==60440== 40 bytes in 1 blocks are still reachable in loss record 4 of 11
==60440== at 0x483977F: malloc (vg_replace_malloc.c:309)
==60440== by 0x13CB3B: alloc::alloc::alloc (alloc.rs:81)
==60440== by 0x13CAAB: alloc::alloc::exchange_malloc (alloc.rs:203)
==60440== by 0x15B8E7: alloc::sync::Arc<T>::new (sync.rs:302)
==60440== by 0x12A536: smol::IoEvent::new (lib.rs:1786)
==60440== by 0x13548D: smol::WorkStealingExecutor::get::EXECUTOR::{{closure}} (lib.rs:801)
==60440== by 0x157734: core::ops::function::FnOnce::call_once (function.rs:232)
==60440== by 0x157803: core::ops::function::FnOnce::call_once (function.rs:232)
==60440== by 0x137235: once_cell::sync::Lazy<T,F>::force::{{closure}} (lib.rs:910)
==60440== by 0x137466: once_cell::sync::OnceCell<T>::get_or_init::{{closure}} (lib.rs:763)
==60440== by 0x15F84C: once_cell::imp::OnceCell<T>::initialize::{{closure}} (imp_std.rs:96)
==60440== by 0x1759AA: once_cell::imp::initialize_inner (imp_std.rs:133)
==60440==
==60440== 40 bytes in 1 blocks are possibly lost in loss record 5 of 11
==60440== at 0x483977F: malloc (vg_replace_malloc.c:309)
==60440== by 0x16CB6B: alloc::alloc::alloc (alloc.rs:81)
==60440== by 0x16CADB: alloc::alloc::exchange_malloc (alloc.rs:203)
==60440== by 0x16B94D: std::sync::mutex::Mutex<T>::new (mutex.rs:168)
==60440== by 0x169FFB: piper::signal::Signal::inner (signal.rs:139)
==60440== by 0x166983: piper::signal::Signal::notify_one (signal.rs:104)
==60440== by 0x15ADEA: <piper::lock::LockGuard<T> as core::ops::drop::Drop>::drop (lock.rs:265)
==60440== by 0x157F8E: core::ptr::drop_in_place (mod.rs:174)
==60440== by 0x128F39: smol::Reactor::register (lib.rs:1116)
==60440== by 0x136417: smol::Async<T>::new (lib.rs:1331)
==60440== by 0x12AAFB: smol::SelfPipe::new (lib.rs:1824)
==60440== by 0x12A420: smol::IoEvent::new (lib.rs:1786)
==60440==
==60440== 72 bytes in 1 blocks are possibly lost in loss record 6 of 11
==60440== at 0x483977F: malloc (vg_replace_malloc.c:309)
==60440== by 0x16CB6B: alloc::alloc::alloc (alloc.rs:81)
==60440== by 0x16CADB: alloc::alloc::exchange_malloc (alloc.rs:203)
==60440== by 0x1695DD: alloc::sync::Arc<T>::new (sync.rs:302)
==60440== by 0x16A03F: piper::signal::Signal::inner (signal.rs:137)
==60440== by 0x166983: piper::signal::Signal::notify_one (signal.rs:104)
==60440== by 0x15ADEA: <piper::lock::LockGuard<T> as core::ops::drop::Drop>::drop (lock.rs:265)
==60440== by 0x157F8E: core::ptr::drop_in_place (mod.rs:174)
==60440== by 0x128F39: smol::Reactor::register (lib.rs:1116)
==60440== by 0x136417: smol::Async<T>::new (lib.rs:1331)
==60440== by 0x12AAFB: smol::SelfPipe::new (lib.rs:1824)
==60440== by 0x12A420: smol::IoEvent::new (lib.rs:1786)
==60440==
==60440== 80 bytes in 1 blocks are still reachable in loss record 7 of 11
==60440== at 0x483977F: malloc (vg_replace_malloc.c:309)
==60440== by 0x13CB3B: alloc::alloc::alloc (alloc.rs:81)
==60440== by 0x13CAAB: alloc::alloc::exchange_malloc (alloc.rs:203)
==60440== by 0x15B57D: alloc::sync::Arc<T>::new (sync.rs:302)
==60440== by 0x128D44: smol::Reactor::register (lib.rs:1107)
==60440== by 0x136417: smol::Async<T>::new (lib.rs:1331)
==60440== by 0x12AAFB: smol::SelfPipe::new (lib.rs:1824)
==60440== by 0x12A420: smol::IoEvent::new (lib.rs:1786)
==60440== by 0x13548D: smol::WorkStealingExecutor::get::EXECUTOR::{{closure}} (lib.rs:801)
==60440== by 0x157734: core::ops::function::FnOnce::call_once (function.rs:232)
==60440== by 0x157803: core::ops::function::FnOnce::call_once (function.rs:232)
==60440== by 0x137235: once_cell::sync::Lazy<T,F>::force::{{closure}} (lib.rs:910)
==60440==
==60440== 576 bytes in 8 blocks are still reachable in loss record 8 of 11
==60440== at 0x483977F: malloc (vg_replace_malloc.c:309)
==60440== by 0x13CB3B: alloc::alloc::alloc (alloc.rs:81)
==60440== by 0x13CAAB: alloc::alloc::exchange_malloc (alloc.rs:203)
==60440== by 0x165A5C: std::sync::rwlock::RwLock<T>::new (rwlock.rs:134)
==60440== by 0x156B8B: crossbeam_utils::sync::sharded_lock::ShardedLock<T>::new::{{closure}} (sharded_lock.rs:104)
==60440== by 0x1423D0: core::iter::adapters::map_fold::{{closure}} (mod.rs:772)
==60440== by 0x140DA8: core::iter::traits::iterator::Iterator::fold::ok::{{closure}} (iterator.rs:1900)
==60440== by 0x12D51A: core::iter::traits::iterator::Iterator::try_fold (iterator.rs:1776)
==60440== by 0x12D42E: core::iter::traits::iterator::Iterator::fold (iterator.rs:1903)
==60440== by 0x142A01: <core::iter::adapters::Map<I,F> as core::iter::traits::iterator::Iterator>::fold (mod.rs:812)
==60440== by 0x141F54: core::iter::traits::iterator::Iterator::for_each (iterator.rs:655)
==60440== by 0x1560B7: <alloc::vec::Vec<T> as alloc::vec::SpecExtend<T,I>>::spec_extend (vec.rs:2054)
==60440==
==60440== 1,016 bytes in 1 blocks are still reachable in loss record 9 of 11
==60440== at 0x483977F: malloc (vg_replace_malloc.c:309)
==60440== by 0x13CB3B: alloc::alloc::alloc (alloc.rs:81)
==60440== by 0x13CAAB: alloc::alloc::exchange_malloc (alloc.rs:203)
==60440== by 0x148B8B: crossbeam_deque::Injector<T>::new (boxed.rs:174)
==60440== by 0x135431: smol::WorkStealingExecutor::get::EXECUTOR::{{closure}} (lib.rs:799)
==60440== by 0x157734: core::ops::function::FnOnce::call_once (function.rs:232)
==60440== by 0x157803: core::ops::function::FnOnce::call_once (function.rs:232)
==60440== by 0x137235: once_cell::sync::Lazy<T,F>::force::{{closure}} (lib.rs:910)
==60440== by 0x137466: once_cell::sync::OnceCell<T>::get_or_init::{{closure}} (lib.rs:763)
==60440== by 0x15F84C: once_cell::imp::OnceCell<T>::initialize::{{closure}} (imp_std.rs:96)
==60440== by 0x1759AA: once_cell::imp::initialize_inner (imp_std.rs:133)
==60440== by 0x15F3A2: once_cell::imp::OnceCell<T>::initialize (imp_std.rs:94)
==60440==
==60440== 1,024 bytes in 1 blocks are still reachable in loss record 10 of 11
==60440== at 0x483BEB8: memalign (vg_replace_malloc.c:908)
==60440== by 0x483BFCE: posix_memalign (vg_replace_malloc.c:1072)
==60440== by 0x1AE7A9: __rdl_alloc (alloc.rs:85)
==60440== by 0x13CB3B: alloc::alloc::alloc (alloc.rs:81)
==60440== by 0x13C911: <alloc::alloc::Global as core::alloc::Alloc>::alloc (alloc.rs:169)
==60440== by 0x152DF6: alloc::raw_vec::RawVec<T,A>::reserve_internal (raw_vec.rs:661)
==60440== by 0x153CB9: alloc::raw_vec::RawVec<T,A>::reserve (raw_vec.rs:485)
==60440== by 0x1557D9: alloc::vec::Vec<T>::reserve (vec.rs:493)
==60440== by 0x155FCC: <alloc::vec::Vec<T> as alloc::vec::SpecExtend<T,I>>::spec_extend (vec.rs:2050)
==60440== by 0x1561F0: <alloc::vec::Vec<T> as alloc::vec::SpecExtend<T,I>>::from_iter (vec.rs:2034)
==60440== by 0x1567DC: <alloc::vec::Vec<T> as core::iter::traits::collect::FromIterator<T>>::from_iter (vec.rs:1919)
==60440== by 0x141E74: core::iter::traits::iterator::Iterator::collect (iterator.rs:1558)
==60440==
==60440== 12,000 bytes in 1 blocks are still reachable in loss record 11 of 11
==60440== at 0x483977F: malloc (vg_replace_malloc.c:309)
==60440== by 0x13CB3B: alloc::alloc::alloc (alloc.rs:81)
==60440== by 0x13C911: <alloc::alloc::Global as core::alloc::Alloc>::alloc (alloc.rs:169)
==60440== by 0x14FF0D: alloc::raw_vec::RawVec<T,A>::allocate_in (raw_vec.rs:88)
==60440== by 0x14F855: alloc::raw_vec::RawVec<T>::with_capacity (raw_vec.rs:140)
==60440== by 0x1548D3: alloc::vec::Vec<T>::with_capacity (vec.rs:355)
==60440== by 0x161F4D: <T as alloc::vec::SpecFromElem>::from_elem (vec.rs:1733)
==60440== by 0x155981: alloc::vec::from_elem (vec.rs:1723)
==60440== by 0x138203: smol::sys::Events::new (lib.rs:2320)
==60440== by 0x135E97: smol::Reactor::get::REACTOR::{{closure}} (lib.rs:1090)
==60440== by 0x157604: core::ops::function::FnOnce::call_once (function.rs:232)
==60440== by 0x1576E3: core::ops::function::FnOnce::call_once (function.rs:232)
==60440==
==60440== LEAK SUMMARY:
==60440== definitely lost: 0 bytes in 0 blocks
==60440== indirectly lost: 0 bytes in 0 blocks
==60440== possibly lost: 112 bytes in 2 blocks
==60440== still reachable: 14,796 bytes in 16 blocks
==60440== suppressed: 0 bytes in 0 blocks
==60440==
==60440== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0)
Currently, `bind` and `connect` use a `ToString` trait bound to be generic over `String` and `&str`, and convert the argument to a set of addresses internally. This API allows many irrelevant types to be passed (`impl<T: Display> ToString for T`), and does nothing to make the expected input clear to users.
It caters to the simple use case where the addresses are specified at compile time, and forces everyone else to generate a string that will behave properly. This opens the door to a set of exploits in the code generating that string.
For now, I think the API should simply accept a `SocketAddr` as an argument. The implementations don't currently support multiple addresses anyway, and we would want any generic solution in the future to also support these inputs.
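A minimal sketch of what the caller side could look like under the proposal (the `connect` signature in the comment is hypothetical): the caller parses or constructs a `SocketAddr` explicitly, so invalid input is rejected before any I/O happens and no string formatting is involved.

```rust
use std::net::{IpAddr, Ipv4Addr, SocketAddr};

fn main() {
    // Explicit parsing: errors surface here, not inside connect().
    let addr: SocketAddr = "127.0.0.1:8080".parse().expect("invalid socket address");
    assert_eq!(addr.port(), 8080);

    // Addresses can also be built programmatically, with no string step at all.
    let addr2 = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 8080);
    assert_eq!(addr, addr2);

    // A connect API taking SocketAddr would then be called as:
    // Async::<TcpStream>::connect(addr).await?;  // hypothetical signature
    println!("{}", addr);
}
```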
I'd love to have a helper function `smol::start_threads` that starts one thread per CPU, and either returns nothing, or returns a token that can be handed to a `smol::stop_threads` function to tell those threads to stop and then wait for them.
Alternatively, if it's simpler, I'd be happy with a function that takes a future, starts a thread pool before calling `block_on` on it, and stops the thread pool when the future finishes.
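A std-only sketch of the requested shape (both function names are the hypothetical ones from the request; a real version would run `smol::run(future::pending::<()>())` in each worker instead of the placeholder loop, and derive the count from the CPU count):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread::{self, JoinHandle};

struct Threads {
    stop: Arc<AtomicBool>,
    handles: Vec<JoinHandle<()>>,
}

// Hypothetical start_threads: one worker per "CPU" (count passed in here
// to stay std-only; a real version might use num_cpus::get()).
fn start_threads(n: usize) -> Threads {
    let stop = Arc::new(AtomicBool::new(false));
    let handles = (0..n)
        .map(|_| {
            let stop = stop.clone();
            thread::spawn(move || {
                // Placeholder for smol::run(future::pending::<()>()):
                // spin until asked to stop.
                while !stop.load(Ordering::Relaxed) {
                    thread::yield_now();
                }
            })
        })
        .collect();
    Threads { stop, handles }
}

// Hypothetical stop_threads: signal every worker, then wait for them.
fn stop_threads(t: Threads) {
    t.stop.store(true, Ordering::Relaxed);
    for h in t.handles {
        h.join().unwrap();
    }
}

fn main() {
    let token = start_threads(4);
    stop_threads(token); // returns only after all workers have exited
    println!("all workers stopped");
}
```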
As discussed at https://www.reddit.com/r/rust/comments/g917ad/smol_stjepang_a_small_and_fast_async_runtime_for/fortim2/
Why do you suggest not using `Async<File>`? I'm actually using it for /dev/ptmx (https://github.com/oblique/sktsh). Should I wrap it in another struct and use `libc::read`/`libc::write` directly?
I'd expect this incorrect program to never wake up, never return, and the underlying `stream::poll_fn` to be polled once.
When run with smol 0.1.5 it instead triggers a tight infinite polling loop.
```rust
use smol::{self, Timer};
use std::task::Poll;
use std::time::Duration;
use futures::future::FutureExt;
use futures::stream::{self, StreamExt};

pub fn main() {
    smol::run(async {
        let mut sd = stream::poll_fn(|cx| {
            println!("I am being polled");
            Timer::after(Duration::from_secs(1)).poll_unpin(cx);
            Poll::<Option<()>>::Pending
        });
        sd.next().await;
    })
}
```
For comparison, here is Playground with tokio: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=489752594073ad372fee0a65f0aeb7ad
error[E0432]: unresolved import `wepoll::EventFlag`
--> C:\dev\stjepang\smol\src\lib.rs:1193:21
|
1193 | use wepoll::EventFlag::*;
| ^^^^^^^^^ `EventFlag` is a struct, not a module
error[E0425]: cannot find value `ONESHOT` in this scope
--> C:\dev\stjepang\smol\src\lib.rs:1194:9
|
1194 | ONESHOT | IN | OUT | RDHUP
| ^^^^^^^ not found in this scope
error[E0425]: cannot find value `IN` in this scope
--> C:\dev\stjepang\smol\src\lib.rs:1194:19
|
1194 | ONESHOT | IN | OUT | RDHUP
| ^^ not found in this scope
error[E0425]: cannot find value `OUT` in this scope
--> C:\dev\stjepang\smol\src\lib.rs:1194:24
|
1194 | ONESHOT | IN | OUT | RDHUP
| ^^^ not found in this scope
error[E0425]: cannot find value `RDHUP` in this scope
--> C:\dev\stjepang\smol\src\lib.rs:1194:30
|
1194 | ONESHOT | IN | OUT | RDHUP
| ^^^^^ not found in this scope
warning: unused import: `std::path::Path`
--> C:\dev\stjepang\smol\src\lib.rs:15:5
|
15 | use std::path::Path;
| ^^^^^^^^^^^^^^^
|
= note: `#[warn(unused_imports)]` on by default
warning: unused import: `std::convert::TryInto`
--> C:\dev\stjepang\smol\src\lib.rs:1153:9
|
1153 | use std::convert::TryInto;
| ^^^^^^^^^^^^^^^^^^^^^
warning: unreachable expression
--> C:\dev\stjepang\smol\src\lib.rs:543:5
|
542 | todo!();
| -------- any code following this expression is unreachable
543 | stream::empty()
| ^^^^^^^^^^^^^^^ unreachable expression
|
= note: `#[warn(unreachable_code)]` on by default
warning: unreachable expression
--> C:\dev\stjepang\smol\src\lib.rs:550:5
|
549 | todo!();
| -------- any code following this expression is unreachable
550 | futures_util::io::empty()
| ^^^^^^^^^^^^^^^^^^^^^^^^^ unreachable expression
warning: unreachable expression
--> C:\dev\stjepang\smol\src\lib.rs:558:5
|
557 | todo!();
| -------- any code following this expression is unreachable
558 | futures_util::io::sink()
| ^^^^^^^^^^^^^^^^^^^^^^^^ unreachable expression
error[E0308]: mismatched types
--> C:\dev\stjepang\smol\src\lib.rs:1179:29
|
1179 | self.0.register(source, flags(), index as u64)
| ^^^^^^
| |
| expected reference, found struct `sys::RawSource`
| help: consider borrowing here: `&source`
|
= note: expected reference `&_`
found struct `sys::RawSource`
error[E0308]: mismatched types
--> C:\dev\stjepang\smol\src\lib.rs:1182:31
|
1182 | self.0.reregister(source, flags(), index as u64)?
| ^^^^^^
| |
| expected reference, found struct `sys::RawSource`
| help: consider borrowing here: `&source`
|
= note: expected reference `&_`
found struct `sys::RawSource`
error[E0308]: try expression alternatives have incompatible types
--> C:\dev\stjepang\smol\src\lib.rs:1182:13
|
1182 | self.0.reregister(source, flags(), index as u64)?
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected enum `std::result::Result`, found `()`
|
= note: expected enum `std::result::Result<(), std::io::error::Error>`
found unit type `()`
help: try removing this `?`
|
1182 | self.0.reregister(source, flags(), index as u64)
| --
help: try using a variant of the expected enum
|
1182 | Ok(self.0.reregister(source, flags(), index as u64)?)
|
error[E0308]: mismatched types
--> C:\dev\stjepang\smol\src\lib.rs:1185:31
|
1185 | self.0.deregister(source)
| ^^^^^^
| |
| expected reference, found struct `sys::RawSource`
| help: consider borrowing here: `&source`
|
= note: expected reference `&_`
found struct `sys::RawSource`
error[E0308]: mismatched types
--> C:\dev\stjepang\smol\src\lib.rs:1189:25
|
1189 | self.0.poll(events, timeout)
| ^^^^^^ expected struct `wepoll_binding::Events`, found struct `sys::Events`
|
= note: expected mutable reference `&mut wepoll_binding::Events`
found mutable reference `&mut sys::Events`
error: aborting due to 10 previous errors
I managed to run `warp` in `smol`, quite easily:

```rust
use warp::Filter;

fn main() {
    for _ in 0..num_cpus::get() {
        std::thread::spawn(|| smol::run(futures::future::pending::<()>()));
    }
    smol::block_on(async {
        let hello = warp::path!("hello" / String)
            .map(|name| format!("Hello, {}!", name));
        warp::serve(hello)
            .run(([127, 0, 0, 1], 8080))
            .await;
    })
}
```
Please consider adding this as a demo.
Currently we don't cache anything, and that's very wasteful.
We should follow actions-rs/meta#21 closely for the availability of better caching support in the Github Actions helpers we use.
And we are lucky to already have the maintainer of these helpers, @svartalf, as a committer!
Would like to be able to start a child process and asynchronously wait for it to complete.
Basically looking for an alternative to `tokio::process`.
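Until such an API exists, one common workaround is to park the blocking `wait()` on a helper thread and bridge the result back over a channel. The sketch below is plain std and Unix-flavoured (`sh -c`); a real integration would wrap this in a future (e.g. via a blocking-task spawn) instead of blocking on `recv()`:

```rust
use std::process::Command;
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    // Run the blocking status() off the main thread; an async wrapper would
    // resolve a future when the result arrives instead of blocking on recv().
    thread::spawn(move || {
        let status = Command::new("sh")
            .arg("-c")
            .arg("exit 0")
            .status()
            .expect("failed to spawn child");
        tx.send(status).unwrap();
    });

    let status = rx.recv().unwrap();
    assert!(status.success());
    println!("child exited successfully: {}", status.success());
}
```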
Do `writer` and `reader` play any real role in the communication here? I see that the writer writes one byte in `notify` and the reader reads bytes in `clear`. But it seems that the `flag` value does not depend on these bytes; it only depends on some atomic ops.
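That is exactly the point of the pattern: the byte carries no information by itself, it only forces the reactor's blocking poll to return so the atomic flag can be re-checked. A std-only sketch of this self-pipe idea (a pair of connected UDP sockets stands in for the real pipe/socket pair; all names are illustrative):

```rust
use std::net::UdpSocket;
use std::sync::atomic::{AtomicBool, Ordering};

struct SelfPipe {
    flag: AtomicBool,
    writer: UdpSocket,
    reader: UdpSocket,
}

impl SelfPipe {
    fn new() -> std::io::Result<Self> {
        let reader = UdpSocket::bind("127.0.0.1:0")?;
        let writer = UdpSocket::bind("127.0.0.1:0")?;
        writer.connect(reader.local_addr()?)?;
        Ok(SelfPipe { flag: AtomicBool::new(false), writer, reader })
    }

    // Set the flag; send a byte only on the false -> true transition so the
    // poller is woken at most once per burst of notifications.
    fn notify(&self) {
        if !self.flag.swap(true, Ordering::SeqCst) {
            let _ = self.writer.send(&[1]);
        }
    }

    // Drain the wakeup byte and clear the flag. The byte's value is never
    // inspected: its only job was to make a blocked poll/recv return.
    fn clear(&self) -> bool {
        let mut buf = [0u8; 8];
        let _ = self.reader.recv(&mut buf);
        self.flag.swap(false, Ordering::SeqCst)
    }
}

fn main() -> std::io::Result<()> {
    let pipe = SelfPipe::new()?;
    pipe.notify();
    pipe.notify(); // deduplicated: no second byte is sent
    assert!(pipe.clear());
    println!("was notified: true");
    Ok(())
}
```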
As pointed out here, we should not advise users to import the certificate into their browsers. It would be much better for users to generate their own.
Any suggestions on what we should do here more concretely? I wonder if it'd be okay to keep the certificate for examples like tls-client.rs and tls-server.rs, but advise against using it for any HTTPS example.
Should we document this?

To run a single-threaded executor, just use:

```rust
smol::run(async { ... })
```

To run a multi-threaded executor, the better usage is:

```rust
// spawn n threads if you want n executor threads
thread::spawn(|| smol::run(futures::future::pending::<()>()));
thread::spawn(|| smol::run(futures::future::pending::<()>()));
thread::spawn(|| smol::run(futures::future::pending::<()>()));
smol::block_on(async { ... })
```
I would like to be able to use smol in software that already uses other runtimes. By that, I don't mean tokio or async-std; I mean completely different runtimes (usually called "I/O loops", "event loops" or "run loops") usually provided by an OS or by a toolkit. A few examples of such runtimes:
CFRunLoop
I'm not suggesting smol should add explicit support for each and every external runtime; I'm thinking of a more generic scheme, akin to what the Wayland event loop does (see `wl_event_loop_get_fd()` / `wl_event_loop_dispatch()` / `wl_event_loop_dispatch_idle()`).
The general idea is I would want the external runtime, rather than smol's reactor, to do the waiting on I/O part. Then, once some I/O can be performed on one of the file descriptors registered with smol, the external runtime would call back into smol, letting it handle the event.
This basically means smol should provide the API to get its epoll/kqueue fd (to be registered with the external event loop), and to "dispatch" events when the fd becomes readable (so, run its reactor & executor until it has nothing to do, then return instead of blocking).
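A sketch of the API shape this implies (all names here are hypothetical, not existing smol API): expose the reactor's fd for registration with the host loop, plus a non-blocking dispatch entry point that drains ready work and returns.

```rust
// Hypothetical surface; RawFd keeps this Unix-flavoured, matching epoll/kqueue.
use std::os::unix::io::RawFd;

trait ExternalLoopIntegration {
    /// The epoll/kqueue fd to register with the host event loop.
    fn reactor_fd(&self) -> RawFd;
    /// Called by the host loop when `reactor_fd()` is readable: process ready
    /// I/O events and run the executor until nothing is runnable, then return
    /// instead of blocking. Returns the number of events handled.
    fn dispatch(&self) -> usize;
}

// Dummy impl so the sketch compiles and can be exercised.
struct FakeReactor {
    fd: RawFd,
    pending: std::cell::Cell<usize>,
}

impl ExternalLoopIntegration for FakeReactor {
    fn reactor_fd(&self) -> RawFd { self.fd }
    fn dispatch(&self) -> usize { self.pending.replace(0) }
}

fn main() {
    let r = FakeReactor { fd: 3, pending: std::cell::Cell::new(2) };
    assert_eq!(r.reactor_fd(), 3);
    assert_eq!(r.dispatch(), 2); // drained everything that was ready
    assert_eq!(r.dispatch(), 0); // nothing left: host loop goes back to waiting
    println!("dispatched");
}
```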
I'm assuming a browser (javascript) environment. Quick thoughts:

- `Async` will be unusable because `AsRawFd`/`AsRawSocket` types don't exist on wasm.
- `Timer` can perhaps be implemented by calling `setTimeout()` in javascript.

Hi @pfmooney, hope you don't mind this mention - thought you might be interested! I have added support for illumos in this runtime by using epoll the same way it is used on Linux/Android.
I think the only missing piece right now is support for illumos in the `nix` crate - as soon as that is done, I believe `smol` should just work on illumos. Do you perhaps know if there are any plans for adding illumos support to `nix`?
I'm planning to migrate a small library project of mine to smol. Before I start the migration I wanted to see how crates written with smol can be used in projects written with tokio. So I wrote the following example:
```rust
use futures::prelude::*;
use smol::{Async, Task};
use std::net::TcpStream;

async fn foo() {
    Task::spawn(async {
        let mut buf = [0u8; 1024];
        let mut s = Async::<TcpStream>::connect("127.0.0.1:4444").await.unwrap();
        s.write_all(b"write something\n").await.unwrap();
        let len = s.read(&mut buf).await.unwrap();
        println!("{}", std::str::from_utf8(&buf[..len]).unwrap());
    })
    .await
}

#[tokio::main]
async fn main() {
    // start smol runtime in a thread
    std::thread::spawn(|| smol::run(future::pending::<()>()));
    foo().await;
}
```
I noticed that without `std::thread::spawn(|| smol::run(future::pending::<()>()))` my example doesn't work.
My question now is: should I require users of my crate to have at least one instance of `smol::run`? What if smol spawned a new thread automatically when no `smol::run` instance exists?
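The "spawn automatically" idea can be sketched with nothing but std: a `Once`-guarded helper that a library calls before spawning tasks, so the first call starts one background executor thread and later calls are no-ops. The helper name is hypothetical, and the thread body is a placeholder for `smol::run(futures::future::pending::<()>())`:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Once;
use std::thread;

static START: Once = Once::new();
static SPAWNED: AtomicUsize = AtomicUsize::new(0);

// Hypothetical helper: the first call starts a background executor thread,
// every later call does nothing.
fn ensure_executor() {
    START.call_once(|| {
        SPAWNED.fetch_add(1, Ordering::SeqCst);
        thread::spawn(|| {
            // A real version would run: smol::run(futures::future::pending::<()>())
            loop {
                thread::park();
            }
        });
    });
}

fn main() {
    ensure_executor();
    ensure_executor();
    ensure_executor();
    // Only one thread was started despite three calls.
    assert_eq!(SPAWNED.load(Ordering::SeqCst), 1);
    println!("executor threads: {}", SPAWNED.load(Ordering::SeqCst));
}
```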
One thing that has consistently confused me about futures is this quote from the docs:
https://doc.rust-lang.org/std/future/trait.Future.html
Once a future has finished, clients should not poll it again.
When a future is not ready yet, poll returns Poll::Pending and stores a clone of the Waker copied from the current Context. This Waker is then woken once the future can make progress. For example, a future waiting for a socket to become readable would call .clone() on the Waker and store it. When a signal arrives elsewhere indicating that the socket is readable, Waker::wake is called and the socket future's task is awoken. Once a task has been woken up, it should attempt to poll the future again, which may or may not produce a final value.
Note that on multiple calls to poll, only the Waker from the Context passed to the most recent call should be scheduled to receive a wakeup.
To me, it's clear why a PROPER future implementation should use the waker to unblock the executor (some combinators like Map don't do this, but they usually contain futures that do use the waker), but it's also clear that there is nothing really stopping someone from writing a naive future that just does CPU-bound work.
To test this I made https://github.com/guswynn/smol-test-block, which implements a future that yields `Pending` some number of times, then returns `Ready(())`, and NEVER interacts with the `_cx`, to see if the executor would be blocked.
> cargo run 10
Compiling smol_test v0.1.0 (/Users/azw/ruststuff/smol_test)
Finished dev [unoptimized + debuginfo] target(s) in 0.87s
Running `target/debug/smol_test 10`
Current countdown: 9
Current countdown: 8
<hang>
I originally thought I'd have to count down 200 times (as I believe that's the number of polls the 2 executors run in a loop), but in fact it seemed to get stuck after only 2 polls.
It feels bad that what seems like an innocent future implementation can hang an executor. Is something wrong with my code here? Am I totally off base thinking that futures that don't use the waker should work? To me it's weird for a constraint like this in Rust to not be statically enforced.
For posterity, the tokio-test branch of that repo tries the same thing, but tokio seemed to have similar behavior:
> cargo run 10000
Compiling smol_test v0.1.0 (/Users/azw/ruststuff/smol_test)
Finished dev [unoptimized + debuginfo] target(s) in 0.78s
Running `target/debug/smol_test 10000`
Current countdown: 9999
<hang>
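The hang can be reproduced without any executor at all by polling such a countdown future by hand with a no-op waker: it returns `Pending` but never arranges a wakeup, so an executor that (correctly) sleeps until woken waits forever. The `wake_before_pending` flag below shows the conventional fix: wake yourself before yielding, which is what yield-style combinators do. This is a std-only illustration, not the code from the linked repo:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that records nothing: stands in for an executor that has gone to
// sleep waiting to be woken.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

struct Countdown {
    left: u32,
    wake_before_pending: bool,
}

impl Future for Countdown {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.left == 0 {
            return Poll::Ready(());
        }
        self.left -= 1;
        if self.wake_before_pending {
            // The fix: schedule ourselves to be polled again before yielding.
            cx.waker().wake_by_ref();
        }
        // Without the wake above, nobody will ever poll us again.
        Poll::Pending
    }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);

    let mut bad = Countdown { left: 2, wake_before_pending: false };
    assert_eq!(Pin::new(&mut bad).poll(&mut cx), Poll::Pending);
    // A real executor now sleeps forever: the waker was never used.

    // With the self-wake, an executor keeps re-polling until completion.
    let mut good = Countdown { left: 2, wake_before_pending: true };
    while Pin::new(&mut good).poll(&mut cx).is_pending() {}
    println!("good future completed");
}
```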
Hello,
First, this looks like an awesome project! Being able to do that much with that little code, and having the ability to seamlessly spawn `Send`, `!Send` and blocking tasks, looks great.
That being said, I came to smol with, as my main approach, “should a library choose to depend on smol rather than on tokio/async-std” (the best choice IMO being to depend on none, but that may or may not be reasonable depending on the API, too many generic arguments kill everything and taking a trait for every interaction with the outside world can be cumbersome to use… plus no one has written traits that could be used by all libraries and implemented by all executors yet, though I plan on doing that someday in the next twenty years if no one does it before).
And… it looks like (right now) `Reactor` isn't exposed. Which basically means that a library that chooses to depend on `smol` won't be usable from other executors without the application spawning a full `smol`, complete with its three executors. While it probably wouldn't cause any noticeable performance penalty, it still feels unclean.
Hence, I believe it might be helpful to expose the `Reactor` to the outside world, so that libraries could choose to depend on `smol`'s `Reactor`, and especially the `Async` struct -- like `tokio` once did.
Now, there are two ways to go about this: either just make it `pub`, or make it a completely separate crate. The advantage I see in making it a separate crate is that it would then become easier to port `smol` to new OSes: it'd just consist of forking the crate, and the user could then simply use the forked reactor while still using upstream `smol`. Not sure there's a difference that's not purely theoretical, though the second option might appear a tad cleaner.
That'd basically mean splitting `smol` in two: the `Reactor` and `Async` struct on one side, and the executors and `Task` struct on the other. Each would then (hopefully) be usable independently. (While still keeping, gated under a feature that's in the default set, the `run` function using `smol`'s current `Reactor`, for people who just want the easy thing.)
What do you think about this idea? Am I missing a reason why that'd not be possible and/or a bad idea?
(Also, if you're aiming for feature-parity: I think `task_local`s are something that can't currently be implemented in a reasonable fashion without support from the executor, so it may make sense to add support for them. But I'm not sure what level of feature-parity you're trying to achieve right now, so I won't open an issue straight away -- I can do so if you want.)
Anyway, thank you for smol, which looks like a really impressive piece of software!
There isn't any good library for async pty that we can consume directly from crates.io. It would be good if there were a `features = ["pty"]`.
A few that I know exist are:
Hey there, I just found this project, and I also happened to follow your GitHub user page to your blog and found the post about it as well. I was thinking that it would be great to link to that post from the README, because it gives some great context on "yet another executor" and helped me realize why I would be interested in using it and what the motivation for its design was.
Right now, the reported coverage on the badge in readme is 31%. Let's get it to 100%!
async-std switched to smol as of 1.6.0-beta.1, so I upgraded https://github.com/casbin/casbin-rs; you can find my work in casbin/casbin-rs#136.
However, after this change I discovered a big benchmark regression using the code from https://github.com/casbin/casbin-rs/tree/master/benches.
Any idea how to fix it? In PR casbin/casbin-rs#136 I only added many feature gates, which shouldn't influence the benchmarks. The only important change is upgrading async-std to the version that uses `smol`.
The bench numbers are here:
name before ns/iter after ns/iter diff ns/iter diff % speedup
b_benchmark_abac_model 7,578 30,203 22,625 298.56% x 0.25
b_benchmark_basic_model 7,815 31,767 23,952 306.49% x 0.25
b_benchmark_cached_abac_model 466 12,451 11,985 2571.89% x 0.04
b_benchmark_cached_key_match 463 13,334 12,871 2779.91% x 0.03
b_benchmark_cached_priority_model 457 12,132 11,675 2554.70% x 0.04
b_benchmark_cached_rbac_model 458 11,731 11,273 2461.35% x 0.04
b_benchmark_cached_rbac_model_large 450 13,266 12,816 2848.00% x 0.03
b_benchmark_cached_rbac_model_medium 445 13,409 12,964 2913.26% x 0.03
b_benchmark_cached_rbac_model_small 448 13,270 12,822 2862.05% x 0.03
b_benchmark_cached_rbac_model_with_domains 520 12,127 11,607 2232.12% x 0.04
b_benchmark_cached_rbac_with_deny 458 13,246 12,788 2792.14% x 0.03
b_benchmark_cached_rbac_with_resource_roles 458 11,389 10,931 2386.68% x 0.04
b_benchmark_key_match 24,955 63,335 38,380 153.80% x 0.39
b_benchmark_priority_model 9,261 32,093 22,832 246.54% x 0.29
b_benchmark_raw 6 6 0 0.00% x 1.00
b_benchmark_rbac_model 21,778 50,993 29,215 134.15% x 0.43
b_benchmark_rbac_model_large 64,688,371 64,885,812 197,441 0.31% x 1.00
b_benchmark_rbac_model_medium 6,209,387 6,290,390 81,003 1.30% x 0.99
b_benchmark_rbac_model_small 620,889 651,250 30,361 4.89% x 0.95
b_benchmark_rbac_model_with_domains 12,608 37,937 25,329 200.90% x 0.33
b_benchmark_rbac_with_deny 37,278 74,414 37,136 99.62% x 0.50
b_benchmark_rbac_with_resource_roles 9,920 32,489 22,569 227.51% x 0.31
b_benmark_cached_basic_model 458 11,746 11,288 2464.63% x 0.04
bench code:
https://github.com/casbin/casbin-rs
Running /target/powerpc64-unknown-linux-gnu/debug/deps/smol-4374c813ff2504f9
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
https://github.com/stjepang/smol/pull/40/checks?check_run_id=624277323#step:6:43
It's the same for every target platform.
Does the cross setup also include qemu for usermode binary translation?
It's either that or we try to get runners for every architecture, which is feasible; I have a machine with an IBM POWER CPU at home on which I can run VMs of both big-endian and little-endian at the same time.
Travis also has powerpc64le runners, see: https://docs.travis-ci.com/user/reference/overview/#virtualisation-environment-vs-operating-system
GitHub Actions does not offer anything other than x86 for now. We might want to go hybrid with CI and start testing on arm and ppc64le natively with Travis.
I don't know if you'd find this out of scope for smol, but I'd love to be able to register a resource in the reactor, and just get notified when it becomes readable/writable/closed/error.
Would you be OK with having something like:

- `Source::readable(Context) -> bool`, which would return false if not readable and notify the waker once it is
- `Source::set_read_wouldblock(Context)`, to tell smol that we performed an operation on this source without using poll_io, hit wouldblock, and would like to get notified when it becomes readable again

If this is acceptable, I can try and implement it.
```rust
use smol::Task;

fn main() {
    smol::run(async {
        Task::local(async {
            print!("helloworld");
        })
        .detach();
    });
}
```
Nothing happened. I expected it to either print `helloworld` or tell me it's not allowed.
We should explain in the documentation that benchmark results in debug mode can look very different compared to tokio, but that in release mode there is not much difference.
I'm working on a plugin-based system where I'm dynamically loading .so files depending on some runtime configuration. In those shared objects I'm running an init function which might spawn an async Task, e.g.:
```rust
async fn init() {
    println!("{:?}", std::thread::current());
    Task::local(async {
        // some more code lives here.
    })
    .detach();
}
```
This however fails with the following error:
thread '<unnamed>' panicked at 'cannot spawn a thread-local task if not inside an executor'
I think the interesting bit of info here is the `thread '<unnamed>'`; the `println!()` in the snippet above simply prints `Thread { id: ThreadId(1), name: Some("main") }`.
My guess is that the dynamically loaded library has its own private copy of thread-local storage variables and thus can't access the static `EXECUTOR`.
I’ve seen the same issue with Tokio. It does work in async-std however.
On Windows, this runtime currently creates a TCP socket on 127.0.0.1 due to its poor IoEvent/Notifier implementation.
This is not acceptable: the existence of such a socket is externally visible and takes up a port on 127.0.0.1 (and note that there may not even be a local interface with that address!), which is not something any Rust async program should be doing.
Instead, a Windows Event, a pipe or another appropriate mechanism must be used.
copy-pasting some suggestions from @benmkw:
We still found it pretty challenging to dig through it. We found our way to the ThreadLocalExecutor, the IoEvent and the WorkStealingExecutor, but the Reactor really puzzled us. We also did not really find how the waker is used. There are these three ways a waker can be implemented (https://boats.gitlab.io/blog/post/wakers-i/), and the flag kind of seemed like the first of the three (which supposedly gets set by the OS), but then sending [1] onto the socket did not really make sense to us: why is the executor writing to my own socket? Or is this a signal that the OS sends to tell the executor that the future is ready? But why is it a socket then? And why is the writer a normal socket but the reader an Async? These were questions that we took into our minds to resolve in the future (after polling some more on them, I guess ;)
(When I'm talking about the OS, I'm thinking of epoll/kqueue.)
I think what would maybe help me is a kind of walk-through of execution: from submitting a future, to how it's suspended, to how the waker gets registered in the OS and how the OS calls back into the executor to signal that the future is ready to be polled again.
Continuing discussion from #14:
(also, if you're aiming for feature-parity, I think
task_locals
are something that can't currently be implemented in a reasonable fashion without support from the executor, so it may make sense to add support for them, but I'm not sure what level of feature-parity you're trying to achieve right now, so won't open an issue straight away -- I can do if you want)
I'm curious - what do you need task-locals for? My guess is for logging, but thought I'd ask anyway.
In theory, task-locals can be implemented without support from the executor as a separate library. Imagine you wrap spawned futures in something like:
let task = Task::spawn(SupportTaskLocals(future));
Then, SupportTaskLocals puts task-locals inside thread-locals every time the future is polled.
Well, logging is indeed one of the use cases for task-locals, but I was also coming from https://github.com/Ekleog/erlust, where, to best emulate Erlang from Rust, I needed a way to store the return path to the current actor as a task-local, as well as the currently configured way to reach other actors, in order to enable both cross-process and network transparency. Disclaimer, though: I haven't touched that code for a while, and I'm no longer sure it's a good idea anyway, so…
As for the SupportTaskLocals solution… the main issue is that the user then needs to remember to apply it manually on each task spawn if they want to use inter-actor communication from that task, while something supported by the executor (behind a feature flag, maybe?) would make it seamless.
https://github.com/stjepang/smol/blob/bc3293203fc9b405dcec37834cd2228f7a9c9697/src/io_event.rs#L171
Since Notifier is defined as:
#[cfg(not(target_os = "linux"))]
type Notifier = Socket;