Comments (11)
Sorry for missing the comment in the closed #9568. @tallclair @rtheis
I would like to explain the shim IO handler in more detail first.
The following chart shows the stdout dataflow of the setns process.
When the setns-process exits, we don't want to lose any logs, so we wait for the IO to be closed.
The pipe+w writer should be closed after the setns-process exits. Since there is only one writer, once the pipe writer is closed the pipe reader on the shim side returns EOF. The shim then closes fifo+w, and the CRI plugin gets EOF from fifo+r.
+--------------------------------+ +-----------shim----------+ +--------CRI--------+
| | | | | |
| setns-process/stdout (pipe+w)----------->(pipe+r)---->(fifo+w)-------->(fifo+r)-->FILE |
| | | | | |
+--------------------------------+ +-------------------------+ +-------------------+
However, the setns-process can have child or grandchild processes. When ExecSync times out, CRI only kills the setns-process.
Since we don't have a sub-cgroup to track all the processes created by the setns-process, we can't force-stop them, so these processes keep holding the pipe+w file descriptor. Based on the current design, the shim will wait for EOF from pipe+r.
In the 'sh', '-c', 'date >> /probe/readiness-log.txt && sleep 45' case, we kill the sh but sleep 45 still holds the writer's file descriptor. That's why it takes so long.
Currently, I don't have a better solution to clean up all the processes created by the setns-process without introducing a sub-cgroup for each exec call. So I introduced a drain_exec_sync_io_timeout option in #7832.
By default, there is no timeout to drain IO. If the user wants to fail fast after a timeout, it can be set to a small value like 10ms.
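For reference, a sketch of setting this option in the CRI plugin section of config.toml (the table path below matches containerd 1.7; please verify it against your containerd version's documentation):

```toml
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  # "0s" (the default) waits indefinitely for exec IO to drain;
  # a short duration fails fast at the cost of possible log loss.
  drain_exec_sync_io_timeout = "5s"
```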
Side note: the setns-process and the init container should take responsibility for cleaning up the orphaned processes #10002 (comment)
from containerd.
@fuweid thank you. That works as you've noted. I'm concerned with changing the value since you all left the default value to effectively disable the timeout. I'd like a reasonable timeout but don't want data loss either. What is your recommendation?
I think that the default value could be 3-5 seconds.
In #3361, we use 2 seconds to drain IO. 3-5 seconds looks safe
I think it is the same problem.
Test environment: containerd 1.7.11, rust-shim latest.
Test function: containerd TestContainerDrainExecIOAfterExit.
The exec exceeds the time limit and the test reports a failure. In the delete path, if the shim returns no error but the IO is still held by a process forked by runc, io.wait will wait a long time.
Maybe add a maximum time limit in io.wait?
The fifo write side not being closed is what causes this problem.
rust-shim looks like it uses the fifo directly, with no pipe in between.
Friendly ping @fuweid, thanks
Hi @jokemanfire ,
rust-shim looks like it uses the fifo directly, with no pipe in between.
Yes, the rust-shim opens the fifo directly. Without a sub-cgroup for each exec process, I don't have a good solution to handle this on the CRI side. Ping @abel-von @Burning1020 for help on the rust-shim issue, since they are the rust-shim maintainers.
Hi @fuweid @Burning1020 @abel-von
I tried to resolve this problem in rust-shim at the beginning. But I think using the fifo directly avoids the performance cost of copy-io (I think it is a good approach). How about adding a timeout to io.wait to avoid waiting forever? Not only for rust-shim, but for every case where the IO is still occupied.
I think that when containerd sends a delete request to the shim, it assumes the shim can close the write end of the IO, so it uses io.wait to wait for the remaining data to be written out. If deletion has to keep waiting for the IO write end to close, containerd is trusting the shim too much, which I think is not very reasonable.
@jokemanfire I have tried to solve this problem in two directions, but both failed:
- Let the shim "tell" the container process to close its write side, but that is impossible.
- Use copy-io just for exec IO. Even if the read side of the pipe was closed, the Rust copy-io thread was still waiting for IO, since the read is a synchronous syscall, unlike in the Go runtime.
So basically, I think there is no practical solution for rust-shimv2. As you mentioned, adding a timeout to io.wait might work.
It defaults that Shim can disable the writing end of IO
When the container holds the fifo directly, the write side of the fifo must be held by the processes created by the exec-init process.
It's unlikely to be closed. The timeout is just for the WaitIO progress; without a close event on the fifo, cri-containerd keeps waiting for EOF from the container. I believe rust-shim still needs a pipe and should forward bytes into the fifo.
But I think use fifo directly can reduce the copy-io 's performance loss now (I think it is a good way).
Just my two cents. Correctness is the first priority. And if there is performance data, it will be more convincing.
How about trying copy-io first? The implementation may not be the same as the go-shim's @Burning1020 . I will try to do this and resolve the issues that differ from the Go runtime implementation.
Use copy-io just for exec IO. Even if the read side of the pipe was closed, the Rust copy-io thread was still waiting for IO, since the read is a synchronous syscall, unlike in the Go runtime.
And I will show the test performance data, to see whether the performance cost is worth it. I also think changing process.io.wait is more valuable. @fuweid
But I still think it's a bit unreasonable that process.io.wait must wait until the write side has been closed, with no limit on how long. Thanks.
And I will show the test performance data, to see whether the performance cost is worth it. I also think changing process.io.wait is more valuable.
Sorry, I didn't follow you here. The write side of the fifo is held by processes in the container instead of the shim, while CRI-containerd holds the read side of the fifo. Even if you have a timeout policy, the write side of the fifo is still open. How does CRI-containerd get the close event? If you already have a POC, please file a pull request on the rust-shim side, thanks.
Thanks for your patience. I know that runc starts the process, and the process holds the fifo write side (such as sh -c sleep 365d). But I don't understand whether containerd needs to wait forever for the write end of the IO to close after the delete RPC request is sent. That's my point. The shim process pid file has been removed, but the IO write side is still used by the started process. Would any problem be caused if containerd just closed the read side? @fuweid