google / gvisor-containerd-shim
containerd shim for gVisor
Home Page: https://gvisor.dev
License: Apache License 2.0
critest v1.12
should allow privilege escalation when false
• Failure in Spec Teardown (AfterEach) [4.790 seconds]
[k8s.io] Security Context
/root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/framework/framework.go:72
NoNewPrivs
/root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/security_context.go:673
should allow privilege escalation when false [AfterEach]
/root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/security_context.go:709
expected log "Effective uid: 0\n" (stream="stdout") not found in logs [{timestamp:{wall:531788930 ext:63688680097 loc:0x145e220} stream:stdout log:Effective uid: 1000
}]
Expected
<bool>: false
to be true
/root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/container.go:544
containerd 1.2 + containerd-shim-runsc-v1
runsc commit 0b768871
Installed gvisor/runsc, containerd, gvisor-containerd-shim, and containerd-shim-runsc-v1 on all nodes of the Kubernetes cluster.
On every node I am able to start the nginx container in the sandbox with runsc, as documented in [1].
Created the RuntimeClass gvisor using runsc per the description in [1] and created a Pod with the gVisor RuntimeClass [1].
However, the pod is not able to start; this is the event: RuntimeHandler "runsc" not supported
3m47s Warning FailedCreatePodSandBox pod/nginx-gvisor Failed to create pod sandbox: rpc error: code = Unknown desc = RuntimeHandler "runsc" not supported
Kubernetes version below; cluster installed via kubeadm.
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
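A likely cause of RuntimeHandler "runsc" not supported is that containerd only accepts handler values that appear under its runtimes configuration, so the handler name in the RuntimeClass must have a matching `[plugins.cri.containerd.runtimes.runsc]` entry. A minimal sketch of the entry to check for (written to a temp file here for illustration; exact keys depend on your containerd version, so treat this as an assumption to verify against the shim v2 quickstart):

```shell
# Illustrative fragment of /etc/containerd/config.toml: the table name
# after "runtimes." is the RuntimeHandler string containerd will accept.
cat > /tmp/containerd-config-fragment.toml <<'EOF'
[plugins.cri.containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"
EOF
# Sanity check that the handler key is present:
grep -q 'runtimes.runsc' /tmp/containerd-config-fragment.toml && echo "handler registered"
```

After editing the real config, containerd must be restarted for the new handler to be picked up.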
I followed the instructions here: https://github.com/google/gvisor-containerd-shim/blob/master/docs/runtime-handler-shim-v2-quickstart.md
(Installed containerd, the containerd shim, and runsc; all are on the PATH and the service is running.)
I am getting the following error:
[root@azuretest-2 ~]# sudo crictl -D runp --runtime runsc sandbox.json
DEBU[0000] RunPodSandboxRequest: &RunPodSandboxRequest{Config:&PodSandboxConfig{Metadata:&PodSandboxMetadata{Name:nginx-sandbox2,Uid:hdishd83djaidwnduwk28bcsb,Namespace:default,Attempt:1,},Hostname:,LogDirectory:/tmp,DnsConfig:nil,PortMappings:[],Labels:map[string]string{},Annotations:map[string]string{},Linux:&LinuxPodSandboxConfig{CgroupParent:,SecurityContext:nil,Sysctls:map[string]string{},},},RuntimeHandler:runsc,}
DEBU[0000] RunPodSandboxResponse: nil
FATA[0000] run pod sandbox failed: rpc error: code = Unknown desc = failed to create containerd task: OCI runtime create failed: creating container: Sandbox: fork/exec /proc/self/exe: invalid argument: unknown
More logs
Jan 29 13:14:45 azuretest-2 containerd[22949]: time="2020-01-29T13:14:45.472182074+05:30" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:nginx-sandbox2,Uid:hdishd83djaidwnduwk28bcsb,Namespace:default,Attempt:1,}"
Jan 29 13:14:45 azuretest-2 containerd[22949]: time="2020-01-29T13:14:45.587956757+05:30" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed1cbf546456dafb50aa85018eba6c7161fddd2f1911b0fbbe895f19d43f7ca1 pid=31166
Jan 29 13:14:45 azuretest-2 containerd[22949]: time="2020-01-29T13:14:45.733567999+05:30" level=info msg="shim disconnected" id=ed1cbf546456dafb50aa85018eba6c7161fddd2f1911b0fbbe895f19d43f7ca1
Jan 29 13:14:45 azuretest-2 containerd[22949]: time="2020-01-29T13:14:45.836475312+05:30" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-sandbox2,Uid:hdishd83djaidwnduwk28bcsb,Namespace:default,Attempt:1,} failed, error" error="failed to create containerd task: OCI runtime create failed: creating container: Sandbox: fork/exec /proc/self/exe: invalid argument: unknown"
Note: with runc I am able to start the container (not sure if this is a permission issue?)
sudo crictl -D runp --runtime runc sandbox.json
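Since runc works, one way to isolate whether the fork/exec failure comes from runsc itself rather than from containerd or the shim is to run runsc directly. A sketch, assuming runsc is on the PATH and supports the `do` subcommand (an assumption; it is present in 2019+ builds):

```shell
# Try runsc outside of containerd; if this also fails with
# "fork/exec /proc/self/exe: invalid argument", the problem is in
# runsc or its platform setup rather than in the shim.
check_runsc() {
  if command -v runsc >/dev/null 2>&1; then
    runsc --version
    # 'runsc do' runs a command in a throwaway sandbox.
    sudo runsc do echo "sandbox ok"
  else
    echo "runsc not installed; skipping"
  fi
}
check_runsc
```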
In the current containerd shim v2 implementation, we are not returning any stats in Stats, because gVisor doesn't support per-container stats yet.
We should add support once gVisor does.
https://gvisor.dev/docs/user_guide/kubernetes/ says:
You can also setup Kubernetes nodes to run pods in gvisor using the containerd CRI runtime and the gvisor-containerd-shim. You can use either the io.kubernetes.cri.untrusted-workload annotation or RuntimeClass to run Pods with runsc. You can find instructions here.
I prefer using the RuntimeClass CRD, but I can't seem to find anything about RuntimeClass in this repo. Is it documented elsewhere?
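For reference, the RuntimeClass itself is a small manifest; a sketch of one plus a pod that selects it (API version as of Kubernetes 1.14+, and `handler` must match the runtime name configured in containerd — treat both as assumptions to check against your cluster version):

```shell
# RuntimeClass manifest and a pod using it; apply with kubectl apply -f.
cat > /tmp/runtimeclass-gvisor.yaml <<'EOF'
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc          # must match the containerd runtime name
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-gvisor
spec:
  runtimeClassName: gvisor
  containers:
  - name: nginx
    image: nginx
EOF
grep -q 'handler: runsc' /tmp/runtimeclass-gvisor.yaml && echo "manifest written"
```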
In pkg/v2/service.go the Create() handler has a switch on option types. This switch does not handle one of the types that containerd 1.2.4 can produce, *runctypes.CreateOptions.
Additionally, the error message "unsupported option type" is extremely vague: it doesn't indicate which part of the system the error comes from or what the cause is.
We should have some integration tests that run with the supported versions of runsc and containerd to make sure that we don't break anything. This should include both crictl-based tests as well as some ctr-based tests.
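Such tests could start from a small smoke-test script exercising both CLI paths; a sketch (the runtime names and image reference are illustrative, and sandbox.json is assumed to be a CRI pod sandbox config as in the quickstart docs):

```shell
# Write a smoke-test script covering the crictl and ctr entry points.
cat > /tmp/gvisor-smoke-test.sh <<'EOF'
#!/bin/sh -e
# crictl path: create a pod sandbox with the runsc handler.
sudo crictl runp --runtime runsc sandbox.json
# ctr path: run a one-shot container under the runsc v2 shim.
sudo ctr run --rm --runtime io.containerd.runsc.v1 \
  docker.io/library/busybox:latest smoke-runsc echo ok
EOF
chmod +x /tmp/gvisor-smoke-test.sh
```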
A configuration like the one below:
cat > /etc/containerd/config.toml <<EOF
disabled_plugins = ["restart"]
# This `plugins.cri.systemd_cgroup = true` setting causes a problem in gvisor-containerd-shim
[plugins.cri]
systemd_cgroup = true
[plugins.linux]
shim = "/usr/local/bin/gvisor-containerd-shim"
shim_debug = true
[plugins.cri.containerd.runtimes.runsc]
runtime_type = "io.containerd.runtime.v1.linux"
runtime_engine = "/usr/local/bin/runsc"
runtime_root = "/run/containerd/runsc"
EOF
ends up with an abnormal termination of the shim as below:
Oct 25 16:44:59 ip-10-252-91-13 containerd[672]: flag provided but not defined: -systemd-cgroup
Oct 25 16:44:59 ip-10-252-91-13 containerd[672]: Usage of /usr/local/bin/gvisor-containerd-shim:
Oct 25 16:44:59 ip-10-252-91-13 containerd[672]: -address string
Oct 25 16:44:59 ip-10-252-91-13 containerd[672]: grpc address back to main containerd
Oct 25 16:44:59 ip-10-252-91-13 containerd[672]: -config string
Oct 25 16:44:59 ip-10-252-91-13 containerd[672]: path to the shim configuration file (default "/etc/containerd/gvisor-containerd-shim.toml")
Oct 25 16:44:59 ip-10-252-91-13 containerd[672]: -containerd-binary containerd publish
Oct 25 16:44:59 ip-10-252-91-13 containerd[672]: path to containerd binary (used for containerd publish) (default "containerd")
Oct 25 16:44:59 ip-10-252-91-13 containerd[672]: -debug
Oct 25 16:44:59 ip-10-252-91-13 containerd[672]: enable debug output in logs
Oct 25 16:44:59 ip-10-252-91-13 containerd[672]: -namespace string
Oct 25 16:44:59 ip-10-252-91-13 containerd[672]: namespace that owns the shim
Oct 25 16:44:59 ip-10-252-91-13 containerd[672]: -runtime-root string
Oct 25 16:44:59 ip-10-252-91-13 containerd[672]: root directory for the runtime (default "/run/containerd/runc")
Oct 25 16:44:59 ip-10-252-91-13 containerd[672]: -socket string
Oct 25 16:44:59 ip-10-252-91-13 containerd[672]: abstract socket path to serve
Oct 25 16:44:59 ip-10-252-91-13 containerd[672]: -workdir string
Oct 25 16:44:59 ip-10-252-91-13 containerd[672]: path used to storge large temporary data
If I remove the plugins.cri.systemd_cgroup = true setting, everything works fine.
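The working configuration is the same file with that one line dropped; a sketch for comparison (paths as in the original config, written to a temp file here):

```shell
cat > /tmp/config-no-systemd-cgroup.toml <<'EOF'
disabled_plugins = ["restart"]
# The cgroup setting is omitted: when it is true, containerd passes a
# -systemd-cgroup flag to the v1 shim, and gvisor-containerd-shim does
# not define that flag, so the shim exits at startup.
[plugins.linux]
  shim = "/usr/local/bin/gvisor-containerd-shim"
  shim_debug = true
[plugins.cri.containerd.runtimes.runsc]
  runtime_type = "io.containerd.runtime.v1.linux"
  runtime_engine = "/usr/local/bin/runsc"
  runtime_root = "/run/containerd/runsc"
EOF
```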
I used the following versions for testing:
critest v1.12
mount with 'rprivate' should not support propagation
• Failure in Spec Teardown (AfterEach) [6.550 seconds]
[k8s.io] Container Mount Propagation
/root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/framework/framework.go:72
runtime should support mount propagation
/root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/container_linux.go:45
mount with 'rprivate' should not support propagation [AfterEach]
/root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/container_linux.go:60
failed to execSync in container "695e7047d9fe8720563ff13fe9f061cf6bb7974d68ed66f0228bc07a754f7f64"
Expected error:
<exec.CodeExitError>: {
Err: {
s: "command 'sh -c mount --bind /etc /tmp/testab622a576a8f2420dc6d31805dfdac185e1b4551a271f0d83abbe4222fbff146097996492/mnt/containerMntPoint' exited with 255: mount: mounting /etc on /tmp/testab622a576a8f2420dc6d31805dfdac185e1b4551a271f0d83abbe4222fbff146097996492/mnt/containerMntPoint failed: Bad address\n",
},
Code: 255,
}
command 'sh -c mount --bind /etc /tmp/testab622a576a8f2420dc6d31805dfdac185e1b4551a271f0d83abbe4222fbff146097996492/mnt/containerMntPoint' exited with 255: mount: mounting /etc on /tmp/testab622a576a8f2420dc6d31805dfdac185e1b4551a271f0d83abbe4222fbff146097996492/mnt/containerMntPoint failed: Bad address
not to have occurred
/root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/container.go:369
mount with 'rshared' should support propagation from host to container and vice versa
• Failure in Spec Teardown (AfterEach) [6.482 seconds]
[k8s.io] Container Mount Propagation
/root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/framework/framework.go:72
runtime should support mount propagation
/root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/container_linux.go:45
mount with 'rshared' should support propagation from host to container and vice versa [AfterEach]
/root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/container_linux.go:94
failed to execSync in container "a4ca1a09d097e7619ab9ea48317548af47fbf052f6e966430487bbb8a6062a81"
Expected error:
<exec.CodeExitError>: {
Err: {
s: "command 'sh -c mount --bind /etc /tmp/testf5ea918a1409f939d2234bd252d6442090176ec0847e04c4d82be23a644b5328860555100/mnt/containerMntPoint' exited with 255: mount: mounting /etc on /tmp/testf5ea918a1409f939d2234bd252d6442090176ec0847e04c4d82be23a644b5328860555100/mnt/containerMntPoint failed: Bad address\n",
},
Code: 255,
}
command 'sh -c mount --bind /etc /tmp/testf5ea918a1409f939d2234bd252d6442090176ec0847e04c4d82be23a644b5328860555100/mnt/containerMntPoint' exited with 255: mount: mounting /etc on /tmp/testf5ea918a1409f939d2234bd252d6442090176ec0847e04c4d82be23a644b5328860555100/mnt/containerMntPoint failed: Bad address
not to have occurred
/root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/container.go:369
mount with 'rslave' should support propagation from host to container
• Failure in Spec Teardown (AfterEach) [6.243 seconds]
[k8s.io] Container Mount Propagation
/root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/framework/framework.go:72
runtime should support mount propagation
/root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/container_linux.go:45
mount with 'rslave' should support propagation from host to container [AfterEach]
/root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/container_linux.go:128
failed to execSync in container "5218a01be8ae4d127449afc605708aef456a71359f06d2b7cffa8d84d7e0480b"
Expected error:
<exec.CodeExitError>: {
Err: {
s: "command 'sh -c mount --bind /etc /tmp/testd4433ff59e5fa368a0e781fdea3d1f54e5e50763a1a44e008c09ff9fe22f32ce123019627/mnt/containerMntPoint' exited with 255: mount: mounting /etc on /tmp/testd4433ff59e5fa368a0e781fdea3d1f54e5e50763a1a44e008c09ff9fe22f32ce123019627/mnt/containerMntPoint failed: Bad address\n",
},
Code: 255,
}
command 'sh -c mount --bind /etc /tmp/testd4433ff59e5fa368a0e781fdea3d1f54e5e50763a1a44e008c09ff9fe22f32ce123019627/mnt/containerMntPoint' exited with 255: mount: mounting /etc on /tmp/testd4433ff59e5fa368a0e781fdea3d1f54e5e50763a1a44e008c09ff9fe22f32ce123019627/mnt/containerMntPoint failed: Bad address
not to have occurred
/root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/container.go:369
containerd 1.2 + containerd-shim-runsc-v1
runsc commit 0b768871
According to the OCI spec, when mount options include bind or rbind, the type field is ignored. So just setting the type to tmpfs is not enough to convert a mount; the [r]bind options also need to be removed.
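As an illustration (the paths, source names, and option lists below are made up), converting a bind mount into a real tmpfs mount needs both the type change and the option removal:

```shell
# Before: a bind mount as containerd might emit it; "type" is ignored
# by the runtime because "rbind" is present in the options.
cat > /tmp/mount-before.json <<'EOF'
{"destination": "/dev/shm", "type": "tmpfs",
 "source": "/run/sandbox/shm", "options": ["rbind", "rw"]}
EOF
# After: for runsc to actually mount a tmpfs, the [r]bind option must go.
cat > /tmp/mount-after.json <<'EOF'
{"destination": "/dev/shm", "type": "tmpfs",
 "source": "shm", "options": ["rw", "noexec", "nosuid", "nodev"]}
EOF
```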
Sorry, maybe this topic is not quite proper here since this is a shim project, but only this project is working on containerd and gVisor.
When I walked through the containerd CRI/OCI plugin, I found that one Pod can only use one kind of OCI engine (runc or runsc), but a runsc pause container uses more resources than a runc container, while the pause container implements the identical function (setting up a sandbox).
I have a new idea: can we let "runc pause container + runsc app container = a Pod"?
BTW, I used Docker commands to simulate the behavior and it works: runsc can share a netns with a runc pause container, and they seem to work well together.
1) docker run k8s.gcr.io/pause:3.1
2) docker run --runtime=runsc -itd --network container:XXXpauseID ubuntu
We should add documentation about how to configure the shim.
Currently we don't support save/restore via containerd with the shim. We need to add support for that.
Support the oom_score_adj CRI option. oom_score_adj should be set on the gVisor sandbox based on the oom_score_adj set on each container in the pod.
We should add the CRI validation tests to the integration tests:
https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/validation.md.
Following the runtime handler start guide and the examples for containerd support, we are unable to pass debug=true to runsc.
/etc/containerd/config.toml with:
disabled_plugins = ["restart"]
[plugins.linux]
shim_debug = true
[plugins.cri.containerd.runtimes.runsc]
runtime_type = "io.containerd.runsc.v1"
[plugins.cri.containerd.runtimes.runsc.options]
TypeUrl = "io.containerd.runsc.v1.options"
ConfigPath = "/run/containerd/runsc/config.toml"
/run/containerd/runsc/config.toml
[runsc_config]
debug = "true"
debug-log = "/tmp/runsc/runsc.log.%ID%.%TIMESTAMP%.%COMMAND%"
user-log = "/tmp/runsc/runsc.log.%ID%.user"
ps -ef | grep runsc shows that --debug is never set to true.
operating system: ubuntu 18.04.03
containerd version: 1.3.3
kernel version: 4.15.0-1060-aws
crictl version: 1.15.0
I've also noticed that running the nginx sandbox example doesn't work either; you hit:
FATA[0000] run pod sandbox failed: rpc error: code = Unknown desc = failed to setup network for sandbox "2d8a12cac60e2d30e394ce53df615f125f967af60f48d837b9beb940abcab834": pods "nginx-sandbox" not found
After the changes in #28, the call to runtime.Wait here now uses context.Background. We've observed this causing failures when attempting to Wait on a very short-lived container, since the container will no longer be around to wait on, and the exit status will be reported as internalErrorCode (128).
There does not appear to be a mechanism to detect this race condition through the shim: there's no error type returned that we could use to detect it, and it's not possible to distinguish between an exit status of 128 returned legitimately by the wait operation and one returned by the shim.
Any suggestions on how to distinguish between these two cases ("real" 128 exit status vs. "race condition and can't wait" 128 exit status)?
We may want a similar optimization to containerd/containerd#3711, or leverage google/gvisor#238.
When I follow the official doc (https://github.com/google/gvisor-containerd-shim/blob/master/docs/runtime-handler-quickstart.md) and try to install gvisor-containerd-shim, something goes wrong.
[root@localhost runsc]# LATEST_RELEASE=$(wget -qO - https://api.github.com/repos/google/gvisor-containerd-shim/releases | grep -oP '(?<="browser_download_url": ")https://[^"]*' | head -1)
[root@localhost runsc]# echo $LATEST_RELEASE
https://github.com/google/gvisor-containerd-shim/releases/download/v0.0.1/containerd-shim-runsc-v1.linux-amd64
[root@localhost runsc]# wget -O gvisor-containerd-shim
wget: missing URL
Usage: wget [OPTION]... [URL]...
Try `wget --help' for more options.
The step "wget -O gvisor-containerd-shim" should be "wget -O gvisor-containerd-shim $LATEST_RELEASE".
I have verified that "wget -O gvisor-containerd-shim $LATEST_RELEASE" works.
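Put together, the corrected download step might look like the following (same release-discovery pipeline as the quickstart; written to a script file here for illustration):

```shell
# Corrected quickstart step: the URL resolved into LATEST_RELEASE must be
# passed to wget; the doc's bare "wget -O gvisor-containerd-shim" omits it.
cat > /tmp/install-shim.sh <<'EOF'
#!/bin/sh -e
LATEST_RELEASE=$(wget -qO - https://api.github.com/repos/google/gvisor-containerd-shim/releases \
  | grep -oP '(?<="browser_download_url": ")https://[^"]*' | head -1)
wget -O gvisor-containerd-shim "${LATEST_RELEASE}"
chmod +x gvisor-containerd-shim
EOF
chmod +x /tmp/install-shim.sh
```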
If you use kubeadm with Docker installed, it prefers docker.sock rather than CRI via containerd.sock. Using Docker's socket file means that the runtime can't support CRI runtime handlers.
Found originally in minikube. Original bug: kubernetes/minikube#3446
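The usual workaround is to point kubeadm at the containerd socket explicitly with its `--cri-socket` flag (the flag exists on kubeadm init/join; the socket path below is the containerd default). A sketch that only builds the command string:

```shell
# Force kubeadm to use containerd's CRI socket instead of auto-detecting
# docker.sock; run the resulting command on the node being initialized.
CRI_SOCKET=/run/containerd/containerd.sock
KUBEADM_CMD="kubeadm init --cri-socket ${CRI_SOCKET}"
echo "${KUBEADM_CMD}"
```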
It looks like gvisor-containerd-shim contains a race condition: it runs runsc to exec a process in the sandbox and immediately tries to read the internal pid for the new container process, but often runsc hasn't created the pid in time, and gvisor-containerd-shim fails to read it.
The relevant error is in gvisor-containerd-shim/pkg/proc/exec.go (line 215 at commit ae2250b).
Script to set up containerd environment:
sudo apt-get update
sudo apt-get -y install socat conntrack ipset
wget -q --show-progress --https-only --timestamping \
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.13.0/crictl-v1.13.0-linux-amd64.tar.gz \
https://storage.googleapis.com/gvisor/releases/nightly/2018-12-07/runsc \
https://github.com/opencontainers/runc/releases/download/v1.0.0-rc5/runc.amd64 \
https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
https://github.com/containerd/containerd/releases/download/v1.2.0-beta.0/containerd-1.2.0-beta.0.linux-amd64.tar.gz \
https://github.com/google/gvisor-containerd-shim/releases/download/v0.0.1-rc.0/gvisor-containerd-shim-v0.0.1-rc.0.linux-amd64
sudo mkdir -p \
/etc/cni/net.d \
/opt/cni/bin
sudo mv gvisor-containerd-shim-v0.0.1-rc.0.linux-amd64 gvisor-containerd-shim
sudo mv runc.amd64 runc
chmod +x runc runsc gvisor-containerd-shim
sudo mv runc runsc /usr/local/bin/
sudo tar -xvf crictl-v1.13.0-linux-amd64.tar.gz -C /usr/local/bin/
sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
sudo tar -xvf containerd-1.2.0-beta.0.linux-amd64.tar.gz -C /
sudo mv gvisor-containerd-shim /bin
sudo sh -c 'echo "runtime-endpoint: unix:///run/containerd/containerd.sock" > /etc/crictl.yaml'
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
"cniVersion": "0.3.1",
"name": "bridge",
"type": "bridge",
"bridge": "cnio0",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"ranges": [
[{"subnet": "10.200.0.0/24"}]
],
"routes": [{"dst": "0.0.0.0/0"}]
}
}
EOF
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
"cniVersion": "0.3.1",
"type": "loopback"
}
EOF
sudo mkdir -p /etc/containerd/
cat << EOF | sudo tee /etc/containerd/config.toml
disabled_plugins = ["restart"]
[plugins]
[plugins.linux]
shim = "/bin/gvisor-containerd-shim"
shim_debug = true
[plugins.cri]
[plugins.cri.containerd]
snapshotter = "overlayfs"
[plugins.cri.containerd.default_runtime]
runtime_type = "io.containerd.runtime.v1.linux"
runtime_engine = "/usr/local/bin/runc"
runtime_root = ""
[plugins.cri.containerd.untrusted_workload_runtime]
runtime_type = "io.containerd.runtime.v1.linux"
runtime_engine = "/usr/local/bin/runsc"
runtime_root = "/run/containerd/runsc"
EOF
cat << EOF | sudo tee /etc/containerd/gvisor-containerd-shim.toml
# This is the path to the default runc containerd-shim.
runc_shim = "/bin/containerd-shim"
EOF
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target
[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable containerd
sudo systemctl start containerd
• Failure in Spec Teardown (AfterEach) [6.301 seconds]
[k8s.io] Security Context
/root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/framework/framework.go:72
NamespaceOption
/root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/security_context.go:72
runtime should support ContainerPID [AfterEach]
/root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/security_context.go:224
Expected
<string>: /pause
to contain substring
<string>: master process
/root/go/src/github.com/kubernetes-sigs/cri-tools/pkg/validate/security_context.go:246
Briefly, this is what happens:
crictl runp xxx.json
{
...
"linux": {
"security_context": {
"namespace_options": {
"pid": "CONTAINER"
}
}
}
}
crictl create & start
{
...
"image": "nginx",
"linux": {
"security_context": {
"namespace_options": {
"pid": "CONTAINER"
}
}
}
}
exec in container cat /proc/1/cmdline
got: /pause
expected: 'master process'
containerd 1.2 + containerd-shim-runsc-v1
runsc: commit: 0b768871
We implemented the first version based on the shim v1 API, because containerd 1.1 is still the containerd version most people use now.
However, the shim v2 API is a more elegantly designed integration API for gVisor.
Now that containerd has released 1.2.1, we should move to the shim v2 API in the next version.
I often see this error in containerd logs and when running crictl to delete a container sandbox. Not sure if it's a shim issue, but creating this to track.
$ sudo crictl stopp ${SANDBOX_ID}
FATA[0000] stopping the pod sandbox "ecc27974c76b134dd6e1f87593a44b24ffa88e97358b22a9ebe2830bd0fa40b6" failed: rpc error: code = Unknown desc = failed to destroy network for sandbox "ecc27974c76b134dd6e1f87593a44b24ffa88e97358b22a9ebe2830bd0fa40b6": running [/sbin/iptables -t nat -D POSTROUTING -s fe80::88e8:2fff:fe88:eef0/64 -j CNI-ec7c908de84d219cb3afd915 -m comment --comment name: "bridge" id: "ecc27974c76b134dd6e1f87593a44b24ffa88e97358b22a9ebe2830bd0fa40b6" --wait]: exit status 2: iptables v1.6.1: invalid mask `64' specified
I investigated google/gvisor#2433.
We can probably forward OOM notifications by using oom.Epoller (in the containerd/pkg/oom package) like runc's shim does:
- https://github.com/containerd/containerd/blob/master/runtime/v2/runc/v2/service.go#L76-L80
- https://github.com/containerd/containerd/blob/master/runtime/v2/runc/v2/service.go#L334-L336
While oom.Epoller is supported on containerd v1.3, gvisor-containerd-shim depends on containerd v1.2, and there are several other changes we would have to follow between v1.2 and v1.3.
Can we move from v1.2 to v1.3? Or should we stay on v1.2 and implement an Epoller like runc's shim?
To: @ianlewis