
kuasar's Issues

make install failed on licheepi4a (riscv) openEuler 23.03

What happened?

Hi, I want to run it on my riscv hardware.
I added 2 lines at vmm/sandbox/src/qemu/config.rs:46:

#[cfg(target_arch = "riscv64")]
const DEFAULT_QEMU_PATH: &str = "/usr/bin/qemu-system-riscv64";

And then I tried to install kuasar on my riscv board (a licheepi4a
running openEuler 23.03) with the following instructions:

git clone https://github.com/kuasar-io/kuasar.git
cd kuasar
make all
make install

It got stuck at "make all"; the output is:

   Compiling vmm-sandboxer v0.1.0 (/root/Applications/kuasar/vmm/sandbox)
warning: unused import: `collections::HashMap`
  --> src/qemu/config.rs:17:11
   |
17 | use std::{collections::HashMap, os::unix::io::RawFd};
   |           ^^^^^^^^^^^^^^^^^^^^
   |
   = note: `#[warn(unused_imports)]` on by default

warning: unreachable statement
   --> src/qemu/config.rs:176:9
    |
173 | /         return Err(Error::Unimplemented(
174 | |             "cpu other than x86 not supported".to_string(),
175 | |         ));
    | |__________- any code following this expression is unreachable
176 | /         if !self.firmware_path.is_empty() {
177 | |             result.bios = Some(self.firmware_path.to_string());
178 | |         }
    | |_________^ unreachable statement
    |
    = note: `#[warn(unreachable_code)]` on by default

warning: `vmm-sandboxer` (lib) generated 2 warnings
    Finished release [optimized] target(s) in 7m 35s
+ exit_flag=0
+ export IMAGE_NAME=centos:7
+ IMAGE_NAME=centos:7
+ export ROOTFS_DIR=/tmp/kuasar-rootfs
+ ROOTFS_DIR=/tmp/kuasar-rootfs
+ export CONTAINER_RUNTIME=containerd
+ CONTAINER_RUNTIME=containerd
+ CONTAINERD_NS=default
+ '[' containerd == containerd ']'
++ awk -F/ '{print NF}'
+ image_parts=1
+ echo 'image_parts 1'
image_parts 1
+ [[ 1 -le 1 ]]
+ IMAGE_NAME=docker.io/library/centos:7
+ echo 'build in docker.io/library/centos:7'
build in docker.io/library/centos:7
+ '[' '!' -n '' ']'
+++ readlink -f vmm/scripts/image/centos/build.sh
++ dirname /root/Applications/kuasar/vmm/scripts/image/centos/build.sh
+ current_dir=/root/Applications/kuasar/vmm/scripts/image/centos
+ pushd /root/Applications/kuasar/vmm/scripts/image/centos/../../../..
~/Applications/kuasar ~/Applications/kuasar
++ pwd
+ REPO_DIR=/root/Applications/kuasar
+ popd
~/Applications/kuasar
+ rm -rf /tmp/kuasar-rootfs
+ mkdir -p /tmp/kuasar-rootfs
+ case "${CONTAINER_RUNTIME}" in
++ ctr -n default images ls
++ grep docker.io/library/centos:7
++ wc -l
ctr: failed to dial "/run/containerd/containerd.sock": context deadline exceeded: connection error: desc = "transport: error while dialing: dial unix:///run/containerd/containerd.sock: timeout"
+ image_count=0
+ [[ 0 -lt 1 ]]
+ ctr -n default images pull docker.io/library/centos:7
ctr: failed to dial "/run/containerd/containerd.sock": context deadline exceeded: connection error: desc = "transport: error while dialing: dial unix:///run/containerd/containerd.sock: timeout"
+ fn_check_result 1 'ctr pull image failed!'
+ '[' 1 '!=' 0 ']'
+ echo 'FAILED: return 1 not as expected! (ctr pull image failed!)'
FAILED: return 1 not as expected! (ctr pull image failed!)
+ (( exit_flag++ ))
++ date +%s
+ container_name=rootfs_builder-1703845538
+ ctr -n default run --rm --net-host --env http_proxy= --env https_proxy= --env ROOTFS_DIR=/tmp/kuasar-rootfs --mount type=bind,src=/root/Applications/kuasar,dst=/kuasar,options=rbind:rw --mount type=bind,src=/tmp/kuasar-rootfs,dst=/tmp/kuasar-rootfs,options=rbind:rw docker.io/library/centos:7 rootfs_builder-1703845538 bash -x /kuasar/vmm/scripts/image/centos/build_rootfs.sh
ctr: failed to dial "/run/containerd/containerd.sock": context deadline exceeded: connection error: desc = "transport: error while dialing: dial unix:///run/containerd/containerd.sock: timeout"
+ fn_check_result 1 'ctr run rootfs_builder-1703845538 return error!'
+ '[' 1 '!=' 0 ']'
+ echo 'FAILED: return 1 not as expected! (ctr run rootfs_builder-1703845538 return error!)'
FAILED: return 1 not as expected! (ctr run rootfs_builder-1703845538 return error!)
+ (( exit_flag++ ))
+ '[' 2 '!=' 0 ']'
+ rm -rf /tmp/kuasar-rootfs
+ exit 2
make: *** [Makefile:30: bin/kuasar.img] Error 2

It successfully compiles the Rust sources, but gets stuck when running the script above.

The problem seems to be FAILED: return 1 not as expected! (ctr pull image failed!), caused by ctr: failed to dial "/run/containerd/containerd.sock": context deadline exceeded: connection error: desc = "transport: error while dialing: dial unix:///run/containerd/containerd.sock: timeout".

What did you expect to happen?

I expect make all and make install to succeed.

How can we reproduce it (as minimally and precisely as possible)?

Install on riscv openEuler 23.03:

git clone https://github.com/kuasar-io/kuasar.git
cd kuasar
make all
make install

I didn't try it on qemu, so I'm not sure whether it behaves the same on qemu-riscv64.

Anything else we need to know?

No response

Dev environment

rustc 1.67.1-dev (d5a82bbd2 2023-02-07) (built from a source tarball)

Execute 'HYPERVISOR=stratovirt make vmm' build command failed due to golang installation

What happened?

Executing the 'HYPERVISOR=stratovirt make vmm' build command failed due to golang installation.

From the following output, we can see that the golang repo at the https://mirror.go-repo.io/centos/go-repo.repo address doesn't support aarch64.

......
+ build_runc /kuasar                                                                                                                                        
+ local repo_dir=/kuasar                                                                                                                                    
+ rpm --import https://mirror.go-repo.io/centos/RPM-GPG-KEY-GO-REPO                                                                                         
+ curl -s https://mirror.go-repo.io/centos/go-repo.repo                                                                                                     
+ tee /etc/yum.repos.d/go-repo.repo                                                                                                                         
[go-repo]                                                                                                                                                   
name=go-repo - CentOS                                                                                                                                       
baseurl=https://mirror.go-repo.io/centos/$releasever/$basearch/                                                                                             
enabled=1                                                                                                                                                   
gpgcheck=1                                                                                                                                                  
gpgkey=https://mirror.go-repo.io/centos/RPM-GPG-KEY-GO-REPO                                                                                                 
+ yum install -y golang make                                                                                                                                
Loaded plugins: fastestmirror, ovl                                                                                                                          
Loading mirror speeds from cached hostfile                                                                                                                  
 * base: ftp.yz.yamagata-u.ac.jp                                                                                                                            
 * epel: ftp.iij.ad.jp                                                                                                                                      
 * extras: mirror.aktkn.sg                                                                                                                                  
 * updates: mirror.aktkn.sg                                                                                                                                 
https://mirror.go-repo.io/centos/7/aarch64/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found                                                      
Trying other mirror.                                                                                                                                        
......

What did you expect to happen?

The golang package installs successfully when building the guest OS rootfs in the container.

How can we reproduce it (as minimally and precisely as possible)?

Execute 'HYPERVISOR=stratovirt make vmm' in the aarch64 environment.

Anything else we need to know?

Nothing.

Dev environment

Dev environment: ARM64 server

Umbrella issue for 2023 H2

Description

This is an umbrella issue to track some future plans.

CI/CD

  • Publish new release by Github Action #81
  • Enhancement of CI lints (Cargo clippy warnings fixup) #82

Core framework

Add observability to the project

What would you like to be added?

Add more observability to the project via opentracing.

Why is this needed?

Adding rich observability to the project not only helps developers better understand the code flow, but also provides additional information for performance analysis and monitoring.
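
For illustration only, a minimal sketch of what span-based tracing could look like, using the tracing and tracing-subscriber crates (the issue names opentracing but no concrete library; the function and field names here are assumptions):

use tracing::{info, instrument};

// Records a span named after the function, with its arguments as fields.
#[instrument]
fn create_sandbox(id: &str) {
    info!(sandbox = id, "creating sandbox");
    // ... the actual creation logic would run inside this span ...
}

fn main() {
    // Print spans and events to stdout; an exporter to a tracing backend
    // could replace this subscriber for performance analysis and monitoring.
    tracing_subscriber::fmt().init();
    create_sandbox("example-sandbox");
}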

[Propose] Kuasar Shim and Domain Sandboxer Server

What would you like to be added?

Currently, the various sandboxers in kuasar run as domain servers resident on the nodes.
Communication between containerd and a sandboxer is achieved by modifying containerd's Sandbox and Runtime logic, and the impact of these modifications on Containerd is huge.

Can we achieve the goal of kuasar without modifying Containerd, or without making huge changes?

The goal of kuasar as I understand it should be:

  1. Ability to support multiple sandbox types
  2. No large number of long-running shim processes

Currently kuasar removes long-running shims via the domain-server sandboxer; kuasar-io/containerd links to the sandboxer server by specifying the sandboxer address in its configuration, and modifies the Sandbox interface and runtime logic.

[proxy_plugins.vmm]
  type = "sandbox"
  address = "/run/vmm-sandboxer.sock"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.vmm]
  runtime_type = "io.containerd.kuasar.v1"
  sandboxer = "vmm"
  io_type = "hvsock"
[proxy_plugins.wasm]
  type = "sandbox"
  address = "/run/wasm-sandboxer.sock"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasm]
  runtime_type = "io.containerd.wasm.v1"
  sandboxer = "wasm"

We could actually keep running the sandboxer as a domain server without making any changes to Containerd (there may be minor changes, but far smaller than kuasar-io/containerd).
I think the core is the Sandbox server, which is a very flexible and versatile design, and we should try to use it as much as possible.

We keep a lightweight shim binary for containerd to call; the shim start command connects to the domain server to request the addresses of the task server and sandbox server, and returns those addresses to containerd's runtime (shim manager).
Unlike containerd-shim-runc-v2, the start subcommand of the kuasar shim (e.g. containerd-shim-kuasarvmm-<version> or containerd-shim-kuasarwasm-<version>) does not fork itself to start the task server and sandbox server; it exits after returning the address and does not run for long, as the sketch below illustrates.
[diagram: kuasar shim]
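
A rough sketch of how such a start subcommand could behave (the one-line socket protocol and names below are illustrative assumptions; a real implementation would likely speak ttrpc to the sandboxer):

use std::io::{Read, Write};
use std::os::unix::net::UnixStream;

// Ask the resident domain sandboxer for the task/sandbox server address,
// hand it back to containerd via stdout, then exit without forking.
fn shim_start(sandboxer_socket: &str, sandbox_id: &str) -> std::io::Result<()> {
    let mut stream = UnixStream::connect(sandboxer_socket)?;
    // Hypothetical request format: "start <sandbox-id>".
    writeln!(stream, "start {}", sandbox_id)?;
    let mut address = String::new();
    stream.read_to_string(&mut address)?;
    // containerd's shim manager reads the server address from stdout.
    print!("{}", address.trim());
    Ok(())
}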

This is already used in runwasi

The shim is no longer a shim between containerd and runc or another underlying runtime, but a shim between containerd and the sandboxer server.

Interacting with the domain sandboxer server through shims that do not run for long periods is fully compatible with the current sandbox and runtime design, and ensures that our goal of avoiding many long-running shim processes is met.


BTW, I also found a new question: how do we update the sandboxer server with minimal impact? Since the sandboxer manages all the underlying container processes, being able to upgrade it smoothly might be important.
If we had a perfect upgrade solution, maybe we could merge all sandboxers into one kuasar-sandboxer domain server.

Maybe we could use wasm to implement the core logic in kuasar-sandboxer and plug in the specific runtime implementations through wasm; that's just an idea. :rofl:

Why is this needed?

As above: we can keep running the sandboxer as a domain server without (major) changes to Containerd, and the Sandbox server is a flexible, versatile design that we should use as much as possible.

Support sandbox device recovery during sandboxer restart

What would you like to be added?

Sandbox device information recovery during sandboxer restart has not been implemented yet.

Why is this needed?

Devices attached to a sandbox before the sandboxer restarts still need to be managed afterwards; otherwise it may cause problems such as residual devices.

Support for runc container

What would you like to be added?

Support running runc containers.

Why is this needed?

Runc containers use Linux namespace/cgroup technology for isolation, and runc is widely used.

Should kill all processes even though bundle path has been removed

What happened?

When a container exits, we should kill all processes even if the bundle path has been removed; otherwise it may cause a process leak.

async fn should_kill_all_on_exit(bundle_path: &str) -> bool {
    match read_spec(bundle_path).await {
        Ok(spec) => has_shared_pid_namespace(&spec),
        Err(e) => {
            error!(
                "failed to read spec when call should_kill_all_on_exit: {}",
                e
            );
            false
        }
    }
}

containerd shim: containerd/containerd@a687d3a#diff-3cd530e1b2d5464c8186bf3bf63e8be48df4107e853ee107751f862c7b503658R549-R554
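
A minimal sketch of the corresponding fix to the snippet above, mirroring the containerd change: if reading the spec fails (for example because the bundle path was removed concurrently), default to killing all processes instead of leaking them.

async fn should_kill_all_on_exit(bundle_path: &str) -> bool {
    match read_spec(bundle_path).await {
        Ok(spec) => has_shared_pid_namespace(&spec),
        Err(e) => {
            error!(
                "failed to read spec when call should_kill_all_on_exit: {}",
                e
            );
            // Changed from `false`: when the bundle is already gone, err on
            // the side of killing everything to avoid a process leak.
            true
        }
    }
}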

What did you expect to happen?

When the bundle path has been removed concurrently, we must guarantee the other processes can still be killed.

How can we reproduce it (as minimally and precisely as possible)?

Create pods and delete them continuously; sometimes a process leak will occur.

Anything else we need to know?

No response

Dev environment

No response

Support limiting VM process vcpu threads and non-vcpu threads separately in the cgroup system

What would you like to be added?

  1. Create a sandbox-level cgroup dir to limit the VM process's resource usage.
  2. For the CPU cgroup subsystem specifically, the sandbox-granularity cgroup directory is subdivided into two subdirectories, vcpu and pod_overhead, which impose resource limits on the vcpu threads and non-vcpu threads of the VM process respectively.

Why is this needed?

PodOverhead is a mechanism introduced by K8s in the context of secure containers. For Pods of the secure container type, in addition to the resources required by the app containers in the Pod, the extra resource overhead introduced by virtualization must also be considered. This overhead can be explicitly declared in the Pod's yaml file, so that during scheduling on a k8s node the PodOverhead resources are taken into account and added to the resource limits declared by all containers in the Pod, forming the total resources requested by the entire Pod.

Therefore the PodOverhead resources must also be enforced: the vcpu threads running the actual container load and the non-vcpu threads of the virtualization process (such as virtualization IO threads, vhost threads, etc.) need to be limited separately, to ensure the resources allocated to the container processes running in the virtual machine match the declared available resources.

The sandbox cgroup management overview is shown in the following diagram: [image: sandbox cgroup management overview]
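
A minimal sketch, with assumed cgroup v1 paths and names, of how the vcpu/pod_overhead split could be laid out and how individual VM threads would be attached:

use std::fs;
use std::io;

// Create the sandbox-level cpu cgroup with its two child groups.
fn setup_sandbox_cpu_cgroups(sandbox_id: &str) -> io::Result<()> {
    for sub in ["vcpu", "pod_overhead"] {
        fs::create_dir_all(format!(
            "/sys/fs/cgroup/cpu/kuasar/{}/{}",
            sandbox_id, sub
        ))?;
    }
    Ok(())
}

// Move one VM thread into a child group: on cgroup v1, writing a tid to
// `tasks` moves only that thread, so vcpu threads and IO/vhost threads
// can be limited separately.
fn attach_thread(sandbox_id: &str, sub: &str, tid: u32) -> io::Result<()> {
    fs::write(
        format!("/sys/fs/cgroup/cpu/kuasar/{}/{}/tasks", sandbox_id, sub),
        tid.to_string(),
    )
}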

The implementation of stats task service and Cgroup setup for wasm sandboxer

What would you like to be added?

When using Wasm Sandboxer, the stats service of TaskService finally calls the stats of WasmEdgeInitLifecycle, which is not implemented yet.

impl ProcessLifecycle<InitProcess> for WasmEdgeInitLifecycle {
    ...
    async fn stats(&self, _p: &InitProcess) -> containerd_shim::Result<Metrics> {
        Err(Error::Unimplemented(
            "exec not supported for wasm containers".to_string(),
        ))
    }
    ...
}

However, the ContainerStats CRI service collects resource stats data by calling the stats task service.

The Containerd shim implementation sets up a Cgroup for a container when running one and collects stats data through the Cgroup. Maybe we should do the same for wasm containers, for metrics and resource quotas.
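
For illustration, a standalone sketch (not the project's actual implementation) of pulling one such stat from the container's cgroup, assuming a cgroup v1 layout; a real stats implementation would fill the Metrics response with values gathered this way:

use std::fs;
use std::io;

// Read the current memory usage of the cgroup that `pid` belongs to.
fn memory_usage_bytes(pid: u32) -> io::Result<u64> {
    // On cgroup v1, /proc/<pid>/cgroup lines look like "4:memory:/kuasar/<id>".
    let cgroup_file = fs::read_to_string(format!("/proc/{}/cgroup", pid))?;
    let rel = cgroup_file
        .lines()
        .find_map(|line| {
            let mut parts = line.splitn(3, ':');
            let _id = parts.next()?;
            let controllers = parts.next()?;
            let path = parts.next()?;
            controllers
                .split(',')
                .any(|c| c == "memory")
                .then(|| path.to_string())
        })
        .ok_or_else(|| io::Error::new(io::ErrorKind::NotFound, "no memory cgroup"))?;
    let usage = fs::read_to_string(format!(
        "/sys/fs/cgroup/memory{}/memory.usage_in_bytes",
        rel
    ))?;
    usage
        .trim()
        .parse()
        .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))
}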

Why is this needed?

ContainerStats is periodically called by kubelet to collect resource usage information of containers, which is important for HPA and other features. But it's not working now and keeps returning an error when the service is called.

ERRO[2023-08-22T07:30:15.041190124Z] collecting metrics for aa75c29929274eff3dc655406f8a64d4f6f881ff0e38129a01265b70d4736c1e  error="Others(\"Unimplemented method: exec not supported for wasm containers\"): unknown"

In addition, it's also useful for resource quotas.

Optimize the vmm_task's log output format

What should be cleaned up or changed?

Currently, vmm_task's log is redirected to the host by the vmm-sandboxer process.
However, vmm-sandboxer just uses the debug!() macro to print vmm_task's logs to stdout, which adds a redundant log header like this:

 [2023-06-27T07:24:46.160098Z DEBUG vmm_sandboxer::utils] console: [2023-06-27T07:24:45.531189Z DEBUG vmm_task::task] receive exit event: PID 87 exit with code 0                    
[2023-06-27T07:24:46.160198Z DEBUG vmm_sandboxer::utils] console: [2023-06-27T07:24:45.531308Z DEBUG vmm_task::io] copy_io: pipe stdout from to hvsock://vsock:2000                 
[2023-06-27T07:24:46.161295Z DEBUG vmm_sandboxer::utils] console: [2023-06-27T07:24:45.532399Z INFO  containerd_shim::asynchronous::task] Create request for b3fdff9ae24160e1fd21a36c6e87af6a47c4cd49a300100867aeae86390c1c63 returns pid 96               

What we want to achieve looks like this:

[2023-06-27T07:24:45.531189Z DEBUG vmm_task::task] receive exit event: PID 87 exit with code 0 
[2023-06-27T07:24:45.531308Z DEBUG vmm_task::io] copy_io: pipe stdout from to hvsock://vsock:2000 
[2023-06-27T07:24:45.532399Z INFO  containerd_shim::asynchronous::task] Create request for b3fdff9ae24160e1fd21a36c6e87af6a47c4cd49a300100867aeae86390c1c63 returns pid 96
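
One possible shape of the fix, as a sketch (the function name is illustrative): forward the guest console line to the host stdout verbatim instead of re-logging it through debug!(), since the line already carries its own timestamp and level header.

use tokio::io::AsyncWriteExt;

// Write one guest console line through unchanged; the guest (vmm-task)
// already prefixed it with a timestamp/level header, so no second header
// is added on the host side.
async fn forward_console_line(line: &str) -> std::io::Result<()> {
    let mut stdout = tokio::io::stdout();
    stdout.write_all(line.as_bytes()).await?;
    stdout.write_all(b"\n").await?;
    stdout.flush().await
}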

Why is this needed?

Remove the redundant log header from the vmm_task component's logs to produce cleaner log output.

make vmm failed due to incompatible glibc version

What happened?

Building vmm with the following command causes a GLIBC-not-found error:

HYPERVISOR=stratovirt make vmm

Error messages:

Downloaded prost-build v0.8.0
  Compiling proc-macro2 v1.0.56
  Compiling cfg-if v1.0.0
  Compiling libc v0.2.142
  Compiling anyhow v1.0.70
  Compiling pin-project-lite v0.2.9
  Compiling log v0.4.17
  Compiling bytes v1.4.0
  Compiling scopeguard v1.1.0
error: failed to run custom build command for `libc v0.2.142`

Caused by:
 process didn't exit successfully: `/kuasar/vmm/target/release/build/libc-f73d1e1fd25036ac/build-script-build` (exit status: 1)
 --- stderr
 /kuasar/vmm/target/release/build/libc-f73d1e1fd25036ac/build-script-build: /lib64/libc.so.6: version `GLIBC_2.29' not found (required by /kuasar/vmm/target/release/build/libc-f73d1e1fd25036ac/build-script-build)
 /kuasar/vmm/target/release/build/libc-f73d1e1fd25036ac/build-script-build: /lib64/libc.so.6: version `GLIBC_2.28' not found (required by /kuasar/vmm/target/release/build/libc-f73d1e1fd25036ac/build-script-build)
 /kuasar/vmm/target/release/build/libc-f73d1e1fd25036ac/build-script-build: /lib64/libc.so.6: version `GLIBC_2.33' not found (required by /kuasar/vmm/target/release/build/libc-f73d1e1fd25036ac/build-script-build)
 /kuasar/vmm/target/release/build/libc-f73d1e1fd25036ac/build-script-build: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /kuasar/vmm/target/release/build/libc-f73d1e1fd25036ac/build-script-build)
 /kuasar/vmm/target/release/build/libc-f73d1e1fd25036ac/build-script-build: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /kuasar/vmm/target/release/build/libc-f73d1e1fd25036ac/build-script-build)
warning: build failed, waiting for other jobs to finish...
error: failed to run custom build command for `log v0.4.17`

Caused by:
 process didn't exit successfully: `/kuasar/vmm/target/release/build/log-1c29e1bfb4ad3fc5/build-script-build` (exit status: 1)
 --- stderr
 /kuasar/vmm/target/release/build/log-1c29e1bfb4ad3fc5/build-script-build: /lib64/libc.so.6: version `GLIBC_2.28' not found (required by /kuasar/vmm/target/release/build/log-1c29e1bfb4ad3fc5/build-script-build)
 /kuasar/vmm/target/release/build/log-1c29e1bfb4ad3fc5/build-script-build: /lib64/libc.so.6: version `GLIBC_2.33' not found (required by /kuasar/vmm/target/release/build/log-1c29e1bfb4ad3fc5/build-script-build)
 /kuasar/vmm/target/release/build/log-1c29e1bfb4ad3fc5/build-script-build: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /kuasar/vmm/target/release/build/log-1c29e1bfb4ad3fc5/build-script-build)
 /kuasar/vmm/target/release/build/log-1c29e1bfb4ad3fc5/build-script-build: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /kuasar/vmm/target/release/build/log-1c29e1bfb4ad3fc5/build-script-build)
make: *** [bin/vmm-task] Error 101
make: *** [Makefile:27: bin/kuasar.initrd] Error 2

What did you expect to happen?

The build succeeds.

How can we reproduce it (as minimally and precisely as possible)?

Build vmm with the following command:

HYPERVISOR=stratovirt make vmm

Anything else we need to know?

Based on my analysis, the problem is caused by the cross-environment build between the host and the container.
In the Makefile, kuasar builds vmm-sandboxer in the host environment, and Rust saves all dependencies in vmm/target/release. Meanwhile, kuasar builds vmm-task inside a container, but with the same kuasar workspace mounted, so the vmm-task build shares the vmm/target/release directory with vmm-sandboxer. Since two different environments are involved, the glibc versions can differ.
On the other hand, vmm-sandboxer is built for the host target while vmm-task is built against musl libc, which could also cause dependency incompatibilities since they share the same target/release directory.

Cleaning up the target directory before the vmm-task build resolves the issue; pointing the two builds at separate CARGO_TARGET_DIRs would avoid sharing artifacts altogether.
However, this is still a bug which needs to be fixed.

Dev environment

No response

Improve the example scripts

Crictl has added a validation that requires UID, name, and namespace to be fully specified in the pod definition; otherwise it reports getting sandbox status of pod "xxx": metadata.Name, metadata.Namespace or metadata.Uid is not in metadata "&PodSandboxMetadata{Name:xxx,Uid:,Namespace:default,Attempt:0,}" when using crictl to remove a wasm pod. The pod.json definition in both example scripts is missing the UID.

Limit container process in CgroupV2

What would you like to be added?

Limit the container process in CgroupV2

Why is this needed?

Cgroup is used to restrict process group resources in Linux. It has two versions: cgroup v1 and cgroup v2, the latter of which is designed to replace its predecessor.
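
For concreteness, a minimal sketch (paths and limit values are assumptions) of limiting a process on cgroup v2's unified hierarchy, where per-controller files such as cpu.max and memory.max replace the separate v1 controller trees:

use std::fs;
use std::io;

fn limit_process_v2(group: &str, pid: u32) -> io::Result<()> {
    let dir = format!("/sys/fs/cgroup/{}", group);
    fs::create_dir_all(&dir)?;
    // cpu.max = "<quota_us> <period_us>": here, 50% of one CPU.
    fs::write(format!("{}/cpu.max", dir), "50000 100000")?;
    // memory.max: a hard limit of 256 MiB.
    fs::write(format!("{}/memory.max", dir), (256u64 << 20).to_string())?;
    // Writing the pid into cgroup.procs moves the whole process.
    fs::write(format!("{}/cgroup.procs", dir), pid.to_string())
}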

quark install page 404

What should be cleaned up or changed?

Quark: To use Quark, please refer to the installation instructions here.

The link is invalid; the page returns a 404.

Why is this needed?

Developers need the Quark installation page details.

failed to start shim: symlink: no such file or directory: unknown

What happened?

E0609 10:45:54.021326 14308 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to start shim: symlink /run/kuasar-vmm/5fc2f80bdcb842544b8b468588c93d75766bb03326954c96dba00bae196cfca8/c74f27e812a9bec410eade712ab87a40303848001a269a01d63ccee3ce2f58c0 /run/containerd/io.containerd.runtime.v2.task/k8s.io/c74f27e812a9bec410eade712ab87a40303848001a269a01d63ccee3ce2f58c0: no such file or directory: unknown" containerID="c74f27e812a9bec410eade712ab87a40303848001a269a01d63ccee3ce2f58c0"
FATA[0041] running container: starting the container "c74f27e812a9bec410eade712ab87a40303848001a269a01d63ccee3ce2f58c0": rpc error: code = Unknown desc = failed to create containerd task: failed to start shim: symlink /run/kuasar-vmm/5fc2f80bdcb842544b8b468588c93d75766bb03326954c96dba00bae196cfca8/c74f27e812a9bec410eade712ab87a40303848001a269a01d63ccee3ce2f58c0 /run/containerd/io.containerd.runtime.v2.task/k8s.io/c74f27e812a9bec410eade712ab87a40303848001a269a01d63ccee3ce2f58c0: no such file or directory: unknown

What did you expect to happen?

1

How can we reproduce it (as minimally and precisely as possible)?

1

Anything else we need to know?

1

Dev environment

No response

Umbrella issue for basic function

Description

This is an umbrella issue to track some basic functions.

CI/CD

  • Run Cargo clippy -D warning and Cargo deny in Github action #22 @dierbei
  • Prepare for the physical machine: Postpone
  • #4 @flyflypeng

Functions

  • Privileged container in vmm-sandboxer : Supported
  • Resource usage stats for both container and sandbox : Supported
  • Time synchronization between Host and Guest VM @Burning1020 #50
  • Limit VM process into cgroup filesystem @flyflypeng #51
  • Sandbox files persist and recover @Vanient #40

Can't install containerd

What happened?

I got an error when installing containerd with make install:

+ install bin/ctr bin/containerd bin/containerd-stress bin/containerd-shim bin/containerd-shim-runc-v1 bin/containerd-shim-runc-v2
install: cannot stat 'bin/ctr': No such file or directory
install: cannot stat 'bin/containerd': No such file or directory
install: cannot stat 'bin/containerd-stress': No such file or directory
install: cannot stat 'bin/containerd-shim': No such file or directory
install: cannot stat 'bin/containerd-shim-runc-v1': No such file or directory
install: cannot stat 'bin/containerd-shim-runc-v2': No such file or directory
make: *** [Makefile:420: install] Error 1

Here is the output of make. I'm not sure if anything is missing.

Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Suggested packages:
  make-doc
The following NEW packages will be installed:
  make
0 upgraded, 1 newly installed, 0 to remove and 39 not upgraded.
Need to get 180 kB of archives.
After this operation, 426 kB of additional disk space will be used.
Get:1 http://us-east-2.ec2.archive.ubuntu.com/ubuntu jammy/main amd64 make amd64 4.3-4.1build1 [180 kB]
Fetched 180 kB in 0s (10.6 MB/s)
Selecting previously unselected package make.
(Reading database ... 64801 files and directories currently installed.)
Preparing to unpack .../make_4.3-4.1build1_amd64.deb ...
Unpacking make (4.3-4.1build1) ...
Setting up make (4.3-4.1build1) ...
Processing triggers for man-db (2.10.2-1) ...
Scanning processes...
Scanning processor microcode...
Scanning linux images...

Running kernel seems to be up-to-date.

The processor microcode seems to be up-to-date.

No services need to be restarted.

No containers need to be restarted.

No user sessions are running outdated binaries.

No VM guests are running outdated hypervisor (qemu) binaries on this host.

What did you expect to happen?

The installation succeeds.

How can we reproduce it (as minimally and precisely as possible)?

I followed the instructions at https://github.com/kuasar-io/kuasar/blob/main/docs/containerd.md#building-and-installing-containerd.

Anything else we need to know?

No response

Dev environment

No response

Build vmm-sandboxer failed when compiling rtnetlink v0.12.0 version

What happened?

Executing the cargo build --release --features=cloud_hypervisor command to build the vmm-sandboxer binary fails with a compiler error (attached as a screenshot in the original issue).

What did you expect to happen?

Build successfully!

How can we reproduce it (as minimally and precisely as possible)?

  1. git fetch the latest version from the main branch
  2. remove the Cargo.lock in the local dir
  3. execute the cargo build --release --features=cloud_hypervisor command

Anything else we need to know?

The rtnetlink v0.12.0 crate depends on the netlink-proto (https://github.com/rust-netlink/netlink-proto) crate, which was updated from v0.11.1 to v0.11.2. Some trait and struct types in the netlink-proto crate changed, which may cause the compiler error.

Dev environment

No response

Sandboxer: Replace features with bin parameter

What should be cleaned up or changed?

Currently there are three features in vmm/sandbox/src/main.rs: cloud_hypervisor, qemu and stratovirt; developers can use --features to control which sandboxer to build.

We can use src/bin to solve this.
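
A sketch of that layout (names assumed): cargo builds every file under src/bin into its own binary, so the per-hypervisor cfg blocks collapse into thin entry points over the shared library crate, for example:

// src/bin/qemu.rs -- one entry point per hypervisor instead of a feature flag.
// `vmm_sandboxer::run` and the `Hypervisor` enum are assumed names for the
// shared logic the library crate would export.
fn main() {
    vmm_sandboxer::run(vmm_sandboxer::Hypervisor::Qemu);
}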

Why is this needed?

However, it's not concise to mix these features in one single file; too many cfgs make the code redundant.

Publish new release by Github Action

What would you like to be added?

Add a new Github Action for new releases. Like this: https://github.com/kuasar-io/kuasar/releases/tag/v0.1.0

The first tar file kuasar-vx.y.z-linux-amd64.tar.gz contains these binaries:

  • vmm-sandboxer(for Cloud Hypervisor) and config_clh.toml
  • kuasar.image and vmlinux.bin for Cloud Hypervisor
  • wasm-sandboxer(for WasmEdge)
  • quark-sandboxer
  • containerd binary and config.toml of Kuasar community
  • shim folder with two shim binaries

The second tar file kuasar-vx.y.z-vendor.tar.gz contains the source code with vendored dependencies.

The x.y.z represents the tag or version number of this release.

Why is this needed?

Currently, Kuasar releases are produced manually; this should be improved with an automated pipeline.

Support log with tag to distinguish sandbox/container

What would you like to be added?

Logs related to a specific sandbox or container should be printed with that sandbox or container's ID (or any other necessary information).

Why is this needed?

When sandboxes or containers are started concurrently, it is difficult to distinguish which logs belong to which sandbox or container.

Support configure the vmm-sandboxer's log level in the config file

What would you like to be added?

Add a new config field under '[sandbox]' module in the config toml file:

[sandbox]
# Set the vmm-sandboxer process's log level
# The valid log_level value is: "error", "warn", "info", "debug", "trace"
# Default log level is "info"
log_level = "info"

Why is this needed?

Currently, vmm-sandboxer's log level is specified via the RUST_LOG environment variable when starting the vmm-sandboxer process.
But that is not a good way to configure the log level, especially when running the vmm-sandboxer process as a systemd service.
So we want to add a more flexible method: configuring the vmm-sandboxer's log level in the config file.
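
A small sketch of the wiring, assuming serde-based toml parsing plus the log and env_logger crates (the field name matches the proposal above):

use serde::Deserialize;
use std::str::FromStr;

#[derive(Deserialize)]
struct SandboxConfig {
    // The proposed `log_level` field under [sandbox].
    #[serde(default = "default_log_level")]
    log_level: String,
}

fn default_log_level() -> String {
    "info".to_string()
}

fn init_logger(config: &SandboxConfig) {
    // Fall back to "info" if the configured value is not a valid level.
    let level = log::LevelFilter::from_str(&config.log_level)
        .unwrap_or(log::LevelFilter::Info);
    env_logger::Builder::new().filter_level(level).init();
}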

Crashed while running wasm program on Ubuntu 22.04

What happened?

I'm testing on Ubuntu 22.04.

I ran the example script run_example_wasm_container.sh,
and the output is:

rustup target add wasm32-wasi
info: component 'rust-std' for target 'wasm32-wasi' is up to date
cd crates/wasi-demo-app && cargo build
    Updating crates.io index
   Compiling wasi-demo-app v0.3.1 (/tmp/runwasi/crates/wasi-demo-app)
    Finished dev [unoptimized + debuginfo] target(s) in 1m 30s
cd crates/wasi-demo-app && cargo build  --features oci-v1-tar
  Downloaded clap_derive v4.4.7
  Downloaded aho-corasick v1.1.2
  Downloaded syn v1.0.109
  Downloaded clap_builder v4.4.9
  Downloaded env_logger v0.10.1
  Downloaded humantime v2.1.0
  Downloaded libc v0.2.151
  Downloaded anstyle-parse v0.2.2
  Downloaded async-trait v0.1.73
  Downloaded clap v4.4.10
  Downloaded derive_builder_core v0.12.0
  Downloaded digest v0.10.7
  Downloaded getset v0.1.2
  Downloaded regex-automata v0.4.1
  Downloaded quote v1.0.33
  Downloaded memchr v2.6.4
  Downloaded sha2 v0.10.8
  Downloaded thiserror v1.0.50
  Downloaded utf8parse v0.2.1
  Downloaded tar v0.4.40
  Downloaded anyhow v1.0.75
  Downloaded bitflags v2.4.0
  Downloaded block-buffer v0.10.4
  Downloaded colorchoice v1.0.0
  Downloaded derive_builder v0.12.0
  Downloaded hex v0.4.3
  Downloaded heck v0.4.1
  Downloaded darling_core v0.14.4
  Downloaded pin-project-lite v0.2.13
  Downloaded oci-spec v0.6.4
  Downloaded rustix v0.38.21
  Downloaded serde_derive v1.0.193
  Downloaded strsim v0.10.0
  Downloaded termcolor v1.3.0
  Downloaded typenum v1.17.0
  Downloaded anstyle v1.0.4
  Downloaded anstyle-query v1.0.0
  Downloaded clap_lex v0.6.0
  Downloaded cpufeatures v0.2.9
  Downloaded filetime v0.2.22
  Downloaded generic-array v0.14.7
  Downloaded itoa v1.0.9
  Downloaded proc-macro-error v1.0.4
  Downloaded ryu v1.0.15
  Downloaded serde_json v1.0.108
  Downloaded thiserror-impl v1.0.50
  Downloaded version_check v0.9.4
  Downloaded bytes v1.5.0
  Downloaded darling v0.14.4
  Downloaded errno v0.3.5
  Downloaded ident_case v1.0.1
  Downloaded proc-macro2 v1.0.69
  Downloaded serde v1.0.193
  Downloaded linux-raw-sys v0.4.10
  Downloaded anstream v0.6.4
  Downloaded darling_macro v0.14.4
  Downloaded is-terminal v0.4.9
  Downloaded regex v1.10.0
  Downloaded sha256 v1.4.0
  Downloaded cfg-if v1.0.0
  Downloaded fnv v1.0.7
  Downloaded regex-syntax v0.8.0
  Downloaded xattr v1.0.1
  Downloaded tokio v1.35.0
  Downloaded log v0.4.20
  Downloaded proc-macro-error-attr v1.0.4
  Downloaded unicode-ident v1.0.12
  Downloaded derive_builder_macro v0.12.0
  Downloaded crypto-common v0.1.6
  Downloaded syn v2.0.38
  Downloaded 70 crates (6.9 MB) in 2m 29s (largest was `linux-raw-sys` at 1.4 MB)
   Compiling proc-macro2 v1.0.69
   Compiling unicode-ident v1.0.12
   Compiling version_check v0.9.4
   Compiling syn v1.0.109
   Compiling strsim v0.10.0
   Compiling typenum v1.17.0
   Compiling generic-array v0.14.7
   Compiling ident_case v1.0.1
   Compiling fnv v1.0.7
   Compiling proc-macro-error-attr v1.0.4
   Compiling libc v0.2.151
   Compiling quote v1.0.33
   Compiling syn v2.0.38
   Compiling proc-macro-error v1.0.4
   Compiling utf8parse v0.2.1
   Compiling serde v1.0.193
   Compiling cfg-if v1.0.0
   Compiling anstyle-parse v0.2.2
   Compiling block-buffer v0.10.4
   Compiling crypto-common v0.1.6
   Compiling anstyle-query v1.0.0
   Compiling anstyle v1.0.4
   Compiling serde_json v1.0.108
   Compiling rustix v0.38.21
   Compiling colorchoice v1.0.0
   Compiling thiserror v1.0.50
   Compiling async-trait v0.1.73
   Compiling memchr v2.6.4
   Compiling aho-corasick v1.1.2
   Compiling anstream v0.6.4
   Compiling digest v0.10.7
   Compiling ryu v1.0.15
   Compiling cpufeatures v0.2.9
   Compiling itoa v1.0.9
   Compiling clap_lex v0.6.0
   Compiling anyhow v1.0.75
   Compiling bytes v1.5.0
   Compiling linux-raw-sys v0.4.10
   Compiling bitflags v2.4.0
   Compiling regex-syntax v0.8.0
   Compiling heck v0.4.1
   Compiling pin-project-lite v0.2.13
   Compiling tokio v1.35.0
   Compiling regex-automata v0.4.1
   Compiling clap_builder v4.4.9
   Compiling sha2 v0.10.8
   Compiling xattr v1.0.1
   Compiling filetime v0.2.22
   Compiling hex v0.4.3
   Compiling log v0.4.20
   Compiling tar v0.4.40
   Compiling is-terminal v0.4.9
   Compiling regex v1.10.0
   Compiling humantime v2.1.0
   Compiling termcolor v1.3.0
   Compiling env_logger v0.10.1
   Compiling darling_core v0.14.4
   Compiling serde_derive v1.0.193
   Compiling thiserror-impl v1.0.50
   Compiling clap_derive v4.4.7
   Compiling sha256 v1.4.0
   Compiling clap v4.4.10
   Compiling darling_macro v0.14.4
   Compiling getset v0.1.2
   Compiling darling v0.14.4
   Compiling derive_builder_core v0.12.0
   Compiling derive_builder_macro v0.12.0
   Compiling derive_builder v0.12.0
   Compiling oci-spec v0.6.4
   Compiling oci-tar-builder v0.3.1 (/tmp/runwasi/crates/oci-tar-builder)
   Compiling wasi-demo-app v0.3.1 (/tmp/runwasi/crates/wasi-demo-app)
    Finished dev [unoptimized + debuginfo] target(s) in 21m 24s
make[1]: Leaving directory '/tmp/runwasi'
[ -f /tmp/runwasi/dist/img.tar ] || cp target/wasm32-wasi/debug/img.tar "dist/img.tar"
sudo ctr -n k8s.io image import --all-platforms dist/img.tar
[sudo] password for necas:
unpacking ghcr.io/containerd/runwasi/wasi-demo-app:latest (sha256:bdd2e5e34cb62837c7d8396017a474afc7adc4a196ff43e620c07999b80e2767)...done
E1213 06:36:02.809060   49586 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unavailable desc = failed to start sandbox \"dd2580d510c0a9ccf37e3b0b2ba25a13d4c94357029dfd740127d51a5ad28ae8\": error reading from server: EOF: unavailable"
FATA[0000] running container: run pod sandbox: rpc error: code = Unavailable desc = failed to start sandbox "dd2580d510c0a9ccf37e3b0b2ba25a13d4c94357029dfd740127d51a5ad28ae8": error reading from server: EOF: unavailable

Kuasar crashed with the log:
terminate called after throwing an instance of 'std::out_of_range' what(): bitset::reset: __position (which is 1) >= _Nb (which is 1)

What did you expect to happen?

The script executes without error messages.

How can we reproduce it (as minimally and precisely as possible)?

  1. Download and install Kuasar:
wget https://github.com/kuasar-io/kuasar/releases/download/v0.3.0/kuasar-v0.3.0-linux-amd64.tar.gz  
tar xzvf kuasar-v0.3.0-linux-amd64.tar.gz  
mkdir -p /var/lib/kuasar
cp kuasar.img vmlinux.bin config_clh.toml /var/lib/kuasar

  2. Start Kuasar:
nohup wasm-sandboxer --listen /run/wasm-sandboxer.sock --dir /run/kuasar-wasm &

  3. Start Containerd:
ENABLE_CRI_SANDBOXES=1 containerd -c config.toml

  4. Run the example script run_example_wasm_container.sh

Anything else we need to know?

No response

Dev environment

WasmEdge: v0.11.2
Kuasar: v0.3.0-linux-amd64 and v0.4.0-linux-amd64

Support to run sandboxer process as a systemd service

What would you like to be added?

Provide a systemd service template file for the sandboxer service, which should be launched after the container engine service (like containerd.service or isulad.service).

Why is this needed?

If the sandboxer process runs as a systemd service, its logs can be gathered through systemd-journald.service. Additionally, we can configure the service's restart policy, which is necessary for a system-wide service.

Provide developers with a simple and automatic method to tailor the kernel

What would you like to be added?

Provide developers with a simple and automatic method to tailor the kernel on demand to meet the different scenarios of kuasar secure container.

Why is this needed?

Reduce the difficulty of developers in tailoring the kernel and decrease the memory footprint of tailored kernel.

Support configure the pcie_root_ports number for StratoVirt hypervisor

What would you like to be added?

Support the following config field in the config_stratovirt.toml file

...
[hypervisor]
...
pcie_root_ports = 15
...

Why is this needed?

Currently, StratoVirt's virt machine type only supports hotplugging PCI/PCIe devices into pcie-root-port devices, so we need to add the specified number of pcie-root-port devices to the stratovirt process command line.

In order to make the number of pcie-root-port devices for a VM configurable, add the new configuration field to the toml file for the StratoVirt hypervisor.
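
A sketch of how the new field could be expanded into the command line (the exact device syntax below is an assumption modeled on QEMU-style flags, not confirmed StratoVirt syntax):

// Expand pcie_root_ports = N into N hotpluggable root-port devices.
fn pcie_root_port_args(count: u32) -> Vec<String> {
    let mut args = Vec::new();
    for i in 0..count {
        args.push("-device".to_string());
        // Assumed syntax: a unique id and port number per root port on pcie.0.
        args.push(format!(
            "pcie-root-port,id=rp{},port=0x{:x},bus=pcie.0,addr=0x{:x}",
            i, i, i + 1
        ));
    }
    args
}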

vmm sandboxer example cannot work for me

What happened?

Hi developers,

I followed the "Quick start" doc:
https://github.com/kuasar-io/kuasar#quick-start
to play with the vmm sandboxer, and got stuck at the last steps. Please help debug the issue :-)

crictl -D -r unix:///run/containerd/containerd.sock run --runtime=vmm container.json pod.json
It failed at starting the container:

DEBU[0000] get image connection
DEBU[0000] get runtime connection
DEBU[0000] RunPodSandboxRequest: &RunPodSandboxRequest{Config:&PodSandboxConfig{Metadata:&PodSandboxMetadata{Name:yc-test-sandbox-4,Uid:,Namespace:default,Att
empt:0,},Hostname:,LogDirectory:/tmp,DnsConfig:nil,PortMappings:[]*PortMapping{},Labels:map[string]string{},Annotations:map[string]string{},Linux:&LinuxPodSan
dboxConfig{CgroupParent:,SecurityContext:&LinuxSandboxSecurityContext{NamespaceOptions:&NamespaceOption{Network:NODE,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOpt
ions:nil,},SelinuxOptions:nil,RunAsUser:nil,ReadonlyRootfs:false,SupplementalGroups:[],Privileged:false,SeccompProfilePath:,RunAsGroup:nil,Seccomp:nil,Apparmo
r:nil,},Sysctls:map[string]string{},Overhead:nil,Resources:nil,},Windows:nil,},RuntimeHandler:vmm,}
DEBU[0000] RunPodSandboxResponse: 9115cd3cf5d02518994f1e8af0c5cbe61e07c53b1c7fee69dfd33effe969acdd
DEBU[0000] PullImageRequest: &PullImageRequest{Image:&ImageSpec{Image:ubuntu:latest,Annotations:map[string]string{},},Aut
h:nil,SandboxConfig:&PodSandboxConfig{Metadata:&PodSandboxMetadata{Name:yc-test-sandbox-4,Uid:,Namespace:default,Attempt:0,},Hostname:,LogDirectory:/tmp,DnsCo
nfig:nil,PortMappings:[]*PortMapping{},Labels:map[string]string{},Annotations:map[string]string{},Linux:&LinuxPodSandboxConfig{CgroupParent:,SecurityContext:&
LinuxSandboxSecurityContext{NamespaceOptions:&NamespaceOption{Network:NODE,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},SelinuxOptions:nil,RunAsUser:ni
l,ReadonlyRootfs:false,SupplementalGroups:[],Privileged:false,SeccompProfilePath:,RunAsGroup:nil,Seccomp:nil,Apparmor:nil,},Sysctls:map[string]string{},Overhe
ad:nil,Resources:nil,},Windows:nil,},}
DEBU[0015] PullImageResponse: &PullImageResponse{ImageRef:sha256:ba6acccedd2923aee4c2acc6a23780b14ed4b8a5fa4e14e252a23b846df9b6c1,}
DEBU[0015] CreateContainerRequest: &CreateContainerRequest{PodSandboxId:9115cd3cf5d02518994f1e8af0c5cbe61e07c53b1c7fee69dfd33effe969acdd,Config:&ContainerConf
ig{Metadata:&ContainerMetadata{Name:ubuntu-1,Attempt:0,},Image:&ImageSpec{Image:ubuntu:latest,Annotations:map[string]stri
ng{},},Command:[/bin/sh -c while true; do echo `date`; sleep 1; done],Args:[],WorkingDir:,Envs:[]*KeyValue{},Mounts:[]*Mount{},Devices:[]*Device{},Labels:map[
string]string{},Annotations:map[string]string{},LogPath:ubuntu.log,Stdin:false,StdinOnce:false,Tty:false,Linux:&LinuxContainerConfig{Resources:nil,SecurityCon
text:&LinuxContainerSecurityContext{Capabilities:nil,Privileged:false,NamespaceOptions:&NamespaceOption{Network:NODE,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOpt
ions:nil,},SelinuxOptions:nil,RunAsUser:nil,RunAsUsername:,ReadonlyRootfs:false,SupplementalGroups:[],ApparmorProfile:,SeccompProfilePath:,NoNewPrivs:false,Ru
nAsGroup:nil,MaskedPaths:[],ReadonlyPaths:[],Seccomp:nil,Apparmor:nil,},},Windows:nil,CDIDevices:[]*CDIDevice{},},SandboxConfig:&PodSandboxConfig{Metadata:&Po
dSandboxMetadata{Name:yc-test-sandbox-4,Uid:,Namespace:default,Attempt:0,},Hostname:,LogDirectory:/tmp,DnsConfig:nil,PortMappings:[]*PortMapping{},Labels:map[
string]string{},Annotations:map[string]string{},Linux:&LinuxPodSandboxConfig{CgroupParent:,SecurityContext:&LinuxSandboxSecurityContext{NamespaceOptions:&Name
spaceOption{Network:NODE,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},SelinuxOptions:nil,RunAsUser:nil,ReadonlyRootfs:false,SupplementalGroups:[],Privi
leged:false,SeccompProfilePath:,RunAsGroup:nil,Seccomp:nil,Apparmor:nil,},Sysctls:map[string]string{},Overhead:nil,Resources:nil,},Windows:nil,},}

DEBU[0015] CreateContainerResponse: ccf4f09edade95a37a762b30d8ac3e4f62784092038346deb98bbf2e20248845
E0616 09:00:55.312186  235653 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadl
ine exceeded" containerID="ccf4f09edade95a37a762b30d8ac3e4f62784092038346deb98bbf2e20248845"
FATA[0017] running container: starting the container "ccf4f09edade95a37a762b30d8ac3e4f62784092038346deb98bbf2e20248845": rpc error: code = DeadlineExceeded de
sc = context deadline exceeded

vmm-sandboxer output (not the full debug log, only key info):
RUST_LOG="debug" /root/kuasar/vmm/sandbox/target/debug/vmm-sandboxer --listen /run/vmm-sandboxer.sock --dir /run/kuasar-vmm

[2023-06-16T01:00:37.850113Z DEBUG vmm_sandboxer::cloud_hypervisor] start virtiofsd with cmdline: Command { std: "/usr/local/bin/virtiofsd" "--log-level" "deb
ug" "--cache" "never" "--thread-pool-size" "4" "--socket-path" "/run/kuasar-vmm/9115cd3cf5d02518994f1e8af0c5cbe61e07c53b1c7fee69dfd33effe969acdd/virtiofs.sock
" "--shared-dir" "/run/kuasar-vmm/9115cd3cf5d02518994f1e8af0c5cbe61e07c53b1c7fee69dfd33effe969acdd" "--sandbox" "none", kill_on_drop: false }
[2023-06-16T01:00:37.850516Z DEBUG vmm_sandboxer::cloud_hypervisor] start cloud hypervisor with cmdline: Command { std: "/usr/local/bin/cloud-hypervisor" "--a
pi-socket" "/run/kuasar-vmm/9115cd3cf5d02518994f1e8af0c5cbe61e07c53b1c7fee69dfd33effe969acdd/api.sock" "--cpus" "boot=1,affinity=,features=" "--memory" "size=
1073741824,shared=on,hugepages=off" "--kernel" "/var/lib/kuasar/vmlinux.bin" "--cmdline" "console=hvc0 root=/dev/pmem0p1 rootflags=data=ordered,errors=remount
-ro ro rootfstype=ext4 task.sharefs_type=virtiofs  task.log_level=debug" "--pmem" "id=rootfs,file=/var/lib/kuasar/kuasar.img,discard_writes=on" "--rng" "src=/
dev/urandom" "--vsock" "cid=3,socket=/run/kuasar-vmm/9115cd3cf5d02518994f1e8af0c5cbe61e07c53b1c7fee69dfd33effe969acdd/task.vsock,id=vsock" "--console" "file=/
tmp/9115cd3cf5d02518994f1e8af0c5cbe61e07c53b1c7fee69dfd33effe969acdd-task.log" "--fs" "tag=kuasar,socket=/run/kuasar-vmm/9115cd3cf5d02518994f1e8af0c5cbe61e07c
53b1c7fee69dfd33effe969acdd/virtiofs.sock,id=fs" "-v", kill_on_drop: false }
[2023-06-16T01:00:37.853301Z DEBUG vmm_sandboxer::utils] virtiofsd: [2023-06-16T01:00:37Z INFO  virtiofsd] Waiting for vhost-user socket connection...
[2023-06-16T01:00:37.872178Z DEBUG vmm_sandboxer::utils] virtiofsd: [2023-06-16T01:00:37Z INFO  virtiofsd] Client connected, servicing requests
[2023-06-16T01:00:37.872669Z DEBUG vmm_sandboxer::utils] cloud-hypervisor: cloud-hypervisor: 1.320936ms: <vmm> INFO:vmm/src/lib.rs:1858 -- API request event:
VmCreate(Mutex { data: VmConfig { cpus: CpusConfig { boot_vcpus: 1, max_vcpus: 1, topology: None, kvm_hyperv: false, max_phys_bits: 46, affinity: None, featur
es: CpuFeatures { amx: false } }, memory: MemoryConfig { size: 1073741824, mergeable: false, hotplug_method: Acpi, hotplug_size: None, hotplugged_size: None,
shared: true, hugepages: false, hugepage_size: None, prefault: false, zones: None, thp: true }, payload: Some(PayloadConfig { firmware: None, kernel: Some("/v
ar/lib/kuasar/vmlinux.bin"), cmdline: Some("console=hvc0 root=/dev/pmem0p1 rootflags=data=ordered,errors=remount-ro ro rootfstype=ext4 task.sharefs_type=virti
ofs  task.log_level=debug"), initramfs: None }), disks: None, net: None, rng: RngConfig { src: "/dev/urandom", iommu: false }, balloon: None, fs: Some([FsConf
ig { tag: "kuasar", socket: "/run/kuasar-vmm/9115cd3cf5d02518994f1e8af0c5cbe61e07c53b1c7fee69dfd33effe969acdd/virtiofs.sock", num_queues: 1, queue_size: 1024,
 id: Some("fs"), pci_segment: 0 }]), pmem: Some([PmemConfig { file: "/var/lib/kuasar/kuasar.img", size: None, iommu: false, discard_writes: true, id: Some("ro
otfs"), pci_segment: 0 }]), serial: ConsoleConfig { file: None, mode: Null, iommu: false }, console: ConsoleConfig { file: Some("/tmp/9115cd3cf5d02518994f1e8a
f0c5cbe61e07c53b1c7fee69dfd33effe969acdd-task.log"), mode: File, iommu: false }, devices: None, user_devices: None, vdpa: None, vsock: Some(VsockConfig { cid:
 3, socket: "/run/kuasar-vmm/9115cd3cf5d02518994f1e8af0c5cbe61e07c53b1c7fee69dfd33effe969acdd/task.vsock", iommu: false, id: Some("vsock"), pci_segment: 0 }),
 iommu: false, sgx_epc: None, numa: None, watchdog: false, platform: None, tpm: None, preserved_fds: None }, poisoned: false, .. }, Sender { .. })
[2023-06-16T01:00:37.872713Z DEBUG vmm_sandboxer::utils] cloud-hypervisor: cloud-hypervisor: 1.541828ms: <vmm> INFO:vmm/src/lib.rs:1858 -- API request event:
VmBoot(Sender { .. })

...

[2023-06-16T01:00:53.312823Z INFO  containerd_sandbox::rpc] append a container ContainerData { id: "ccf4f09edade95a37a762b30d8ac3e4f62784092038346deb98bbf2e20
...

[2023-06-16T01:00:53.313219Z DEBUG vmm_sandboxer::storage] attach storage for mount Mount { destination: "", type: "overlay", source: "overlay", options: ["index=off", "workdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/15/work", "upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/15/fs", "lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/14/fs"] } with id storage1
[2023-06-16T01:00:53.314061Z DEBUG vmm_sandboxer::storage] overlay mount storage for Mount { destination: "", type: "overlay", source: "overlay", options: ["index=off", "workdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/15/work", "upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/15/fs", "lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/14/fs"] }, dest: /run/kuasar-vmm/9115cd3cf5d02518994f1e8af0c5cbe61e07c53b1c7fee69dfd33effe969acdd/storage1
[2023-06-16T01:00:55.311403Z DEBUG h2::codec::framed_read] received frame=Reset { stream_id: StreamId(57), error_code: CANCEL }
>>> no more output

containerd (I added some debug logs in the container create path):

time="2023-06-16T09:00:53.310418772+08:00" level=info msg="CreateContainer within sandbox I
...

time="2023-06-16T09:00:53.311098015+08:00" level=info msg="container \"ccf4f09edade95a37a762b30d8ac3e4f62784092038346deb98bbf2e20248845\": runtime handler: vmm"
time="2023-06-16T09:00:53.311119792+08:00" level=info msg="container \"ccf4f09edade95a37a762b30d8ac3e4f62784092038346deb98bbf2e20248845\": oci runtime: {io.containerd.kuasar.v1   [] []  map[] false false   0  vmm hvsock}"


time="2023-06-16T09:00:53.311135256+08:00" level=info msg="container \"ccf4f09edade95a37a762b30d8ac3e4f62784092038346deb98bbf2e20248845\": taskOpts: []"
time="2023-06-16T09:00:53.311171785+08:00" level=debug msg="Start writing stream \"stdout\" to log file \"/tmp/ubuntu.log\""
time="2023-06-16T09:00:53.311185065+08:00" level=debug msg="Start writing stream \"stderr\" to log file \"/tmp/ubuntu.log\""
time="2023-06-16T09:00:53.332690726+08:00" level=debug msg="garbage collected" d="892.277µs"

time="2023-06-16T09:00:55.313083545+08:00" level=error msg="StartContainer for \"ccf4f09edade95a37a762b30d8ac3e4f62784092038346deb98bbf2e20248845\" failed" error="rpc error: code = DeadlineExceeded desc = failed to create containerd task: context deadline exceeded"

What did you expect to happen?

examples/run_example_container.sh vmm

The vmm example works and the container task gets running!

How can we reproduce it (as minimally and precisely as possible)?

follow:
https://github.com/kuasar-io/kuasar#quick-start

Anything else we need to know?

No response

Dev environment

No response

Some sandboxers can only run on bare metal; this should be highlighted in the doc

What happened?

Newcomers follow the doc to install and start a kuasar container, only to find in the end that it cannot start inside a VM.

What did you expect to happen?

We should highlight the hardware requirements in the doc to avoid this.

How can we reproduce it (as minimally and precisely as possible)?

yes

Anything else we need to know?

No response

Dev environment

No response

vscode with rust-analyzer cannot work for vmm/sandbox crate

Hi,

What happened?

As the subject says: in my dev setup, rust-analyzer works well for all crates (i.e. subprojects with an individual Cargo.toml) except the vmm/sandbox crate.

Does anyone see the same problem? My guess is that it's related to the deeper directory structure of vmm/sandbox.

What did you expect to happen?

Code in the vmm/sandbox subdirectory works with rust-analyzer like the other crates.

How can we reproduce it (as minimally and precisely as possible)?

With vscode + rust-analyzer, try goto definition.

To make rust-analyzer aware of all crates, add a workspace under the root dir; see:
https://www.reddit.com/r/rust/comments/ikzbdj/vscode_multiple_project_roots_rustanalyzer_clippy/

Cargo.toml in root dir:

[workspace]

members = [
    "vmm/common",
    "vmm/sandbox",
    "vmm/task",
    "shim",
    "quark",
]

Anything else we need to know?

No response

Dev environment

No response

Enhancement of CI lints

What would you like to be added?

More warning checks of Clippy linter.

Why is this needed?

Clippy is a collection of lints to catch common mistakes and improve Rust code, so we need stricter (warning-level) lint rules.

fix cargo test compilation warnings

What should be cleaned up or changed?

The result of the cargo test command includes some compilation warnings that need to be cleaned.

Why is this needed?

Make the code, including the test code, cleaner.

Support send startup params to cloud-hypervisor through api-socket

What would you like to be added?

Support running cloud-hypervisor with minimal cmdline params (just the api-socket), then use api-socket requests to send the startup params; a sketch follows the list below.

Why is this needed?

  1. The cloud-hypervisor community has changed the format of its command-line options several times in recent releases, which affected Kuasar's cmdline parameter implementation.
  2. Pre-starting the cloud-hypervisor process may improve startup performance.
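
For illustration, a minimal sketch of the second half of that flow. cloud-hypervisor's api-socket speaks HTTP over a unix socket, so after starting the process with only --api-socket, the sandboxer could send the full VM config via vm.create and then vm.boot; these endpoints match the VmCreate/VmBoot API events visible in kuasar's debug logs, while the hand-rolled request helper below is an assumption for brevity.

use std::io::{Read, Write};
use std::os::unix::net::UnixStream;

// Send one PUT request to the cloud-hypervisor API socket and return the
// raw HTTP response. `Connection: close` lets read_to_string terminate.
fn api_put(socket: &str, path: &str, body: &str) -> std::io::Result<String> {
    let mut stream = UnixStream::connect(socket)?;
    write!(
        stream,
        "PUT {} HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\nContent-Type: application/json\r\nContent-Length: {}\r\n\r\n{}",
        path,
        body.len(),
        body
    )?;
    let mut response = String::new();
    stream.read_to_string(&mut response)?;
    Ok(response)
}

fn boot_vm(socket: &str, vm_config_json: &str) -> std::io::Result<()> {
    // The full startup params travel in the vm.create body, not the cmdline.
    api_put(socket, "/api/v1/vm.create", vm_config_json)?;
    api_put(socket, "/api/v1/vm.boot", "")?;
    Ok(())
}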

Are there plans to add support for Wasmtime?

What would you like to be added?

The readme initially mentions both Wasmtime and WasmEdge in the wasm sections, but then notes that only WasmEdge is currently supported. Are there plans to add support for Wasmtime?

Why is this needed?

Wasmtime is a popular wasm engine with 12K stars on Github vs 6K for WasmEdge. It is backed by the Bytecode Alliance, which consists of many organizations. Adding support for Wasmtime should allow for easier adoption of Kuasar.

vmm-task is built explicitly with x86_64-unknown-linux-musl in Makefile

What happened?

The build fails on an arm64 machine. I noticed that arm will be supported in 2024, but explicitly specifying the x86 target in the Makefile is not good practice.

What did you expect to happen?

The build succeeds on arm64.

How can we reproduce it (as minimally and precisely as possible)?

Build on an arm machine.

Anything else we need to know?

No response

Dev environment

No response
