
kwok's Introduction

KWOK (Kubernetes WithOut Kubelet)

KWOK is pronounced as /kwɔːk/.

KWOK is a toolkit that enables setting up a cluster of thousands of Nodes in seconds. Under the hood, all Nodes are simulated to behave like real ones, so the overall approach has a pretty low resource footprint that you can easily play around with on your laptop.

What is KWOK?

KWOK stands for Kubernetes WithOut Kubelet. So far, it provides two tools:

  • kwok is the cornerstone of this project, responsible for simulating the lifecycle of fake nodes, pods, and other Kubernetes API resources.
  • kwokctl is a CLI tool designed to streamline the creation and management of clusters, with nodes simulated by kwok.

Please see our website for more in-depth information.

Why KWOK?

  • Lightweight: You can simulate thousands of nodes on your laptop without significant CPU or memory consumption. Currently, KWOK can reliably maintain 1k nodes and 100k pods.
  • Fast: You can create and delete clusters and nodes almost instantly, without waiting for boot or provisioning. Currently, KWOK can create 20 nodes or pods per second.
  • Compatibility: KWOK works with any tools or clients that are compliant with Kubernetes APIs, such as kubectl, helm, kui, etc.
  • Portability: KWOK has no specific hardware or software requirements. You can run it using pre-built images once Docker or Nerdctl is installed. Alternatively, binaries are available for all platforms and can be easily installed.
  • Flexibility: You can configure different node types, labels, taints, capacities, conditions, etc., and you can configure different pod behaviors, status, etc. to test different scenarios and edge cases.

Community

See our own contributor guide and the Kubernetes community page.

Getting Involved

If you're interested in participating in future discussions or development related to KWOK, there are several ways to get involved.

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.

kwok's People

Contributors

actions-user, caozhuozi, carlory, dependabot[bot], fish-pro, fuweid, garrybest, hezhizhen, huang-wei, jarhmj, joeyyy09, k8s-ci-robot, lianghao208, mohamedasifs123, muma378, neerajnagure, network-charles, nikola-jokic, qingwave, sologgfun, songjoy, sunya-ch, usernameisnull, windsonsea, wlp1153468871, wzshiming, yuanchen8911, yulng, zhuzhenghao, zwpaper


kwok's Issues

[kwok] Provide Kubelet metrics

What would you like to be added?

Provide generic functions that make it possible to simulate data for any metric.

kind: Metric
apiVersion: kwok.x-k8s.io/v1alpha1
metadata:
  name: node-name
spec:
  # I'm not sure if there is a way to configure the `metric-server` to change the path of the node metrics collection
  path: "/metrics/nodes/{{ .metadata.name }}"
  metrics:
    - name: kubelet_node_name
      help: "[ALPHA] The node's name. The count is always 1."
      kind: gauge
      labels:
        - name: node
          value: 'node.metadata.name'
      value: '1'
    - name: kubelet_started_containers_total
      # For special cases, we can also consider providing special functions to provide data,
      # such as this number of containers started.
      help: "[ALPHA] Cumulative number of containers started"
      kind: counter
      value: 'startedContainersTotal( node.metadata.name )'
    - name: kubelet_pleg_relist_duration_seconds
      help: "[ALPHA] Duration in seconds of a single pod list and pod events list call."
      kind: histogram
      buckets:
        - le: 0.005
          value: '( unixSecond(now()) - unixSecond(node.metadata.creationTimestamp) ) / 10'
        - le: 0.01
          value: '( unixSecond(now()) - unixSecond(node.metadata.creationTimestamp) ) / 10'
        - le: 0.025
          value: '( unixSecond(now()) - unixSecond(node.metadata.creationTimestamp) ) / 9'
        - le: 0.05
          value: '( unixSecond(now()) - unixSecond(node.metadata.creationTimestamp) ) / 9'
        - le: 0.1
          value: '( unixSecond(now()) - unixSecond(node.metadata.creationTimestamp) ) / 8'
        - le: 0.25
          value: '( unixSecond(now()) - unixSecond(node.metadata.creationTimestamp) ) / 8'
        - le: 0.5
          value: '( unixSecond(now()) - unixSecond(node.metadata.creationTimestamp) ) / 7'
        - le: 1
          value: '( unixSecond(now()) - unixSecond(node.metadata.creationTimestamp) ) / 7'
        - le: 2.5
          value: '( unixSecond(now()) - unixSecond(node.metadata.creationTimestamp) ) / 6'
        - le: 5
          value: '( unixSecond(now()) - unixSecond(node.metadata.creationTimestamp) ) / 6'
        - le: 10
          value: '( unixSecond(now()) - unixSecond(node.metadata.creationTimestamp) ) / 5'
        - le: +Inf
          value: '( unixSecond(now()) - unixSecond(node.metadata.creationTimestamp) ) / 4'

Describing values using CEL expressions
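
As a sketch of what evaluating such CEL expressions could look like, the snippet below uses the google/cel-go library to compile and evaluate `node.metadata.name` against a map standing in for a Node. The variable binding and environment setup are illustrative assumptions, not kwok's implementation.

package main

import (
	"fmt"

	"github.com/google/cel-go/cel"
)

func main() {
	// Declare a dynamically-typed `node` variable, mirroring the
	// `node.metadata.name` expression used in the proposal above.
	env, err := cel.NewEnv(cel.Variable("node", cel.DynType))
	if err != nil {
		panic(err)
	}
	ast, iss := env.Compile(`node.metadata.name`)
	if iss != nil && iss.Err() != nil {
		panic(iss.Err())
	}
	prg, err := env.Program(ast)
	if err != nil {
		panic(err)
	}
	// Evaluate against a plain map standing in for a Node object.
	out, _, err := prg.Eval(map[string]any{
		"node": map[string]any{"metadata": map[string]any{"name": "fake-node"}},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(out.Value()) // fake-node
}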

Why is this needed?

#159 (reply in thread)

Logger format enhancement

What would you like to be added?

In kwok, logs are printed like this:

WARN Text logger                                                               cluster=kwok key1=val1 key2=key3

There is a long run of spaces between the message and the tags, which is not very clear from my side.

The code in the logger shows:

_, err = fmt.Fprintf(c.output, "%s%*s\n", msg, termWidth-msgWidth, attrsStr)

Why must the printed log line be padded to the terminal width?

Why is this needed?

To print clearer logs.
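
One possible direction, sketched below: join the message and attributes with a fixed two-space gap instead of right-padding to the terminal width. This is a hypothetical alternative for illustration, not the actual kwok logger code.

package main

import (
	"fmt"
	"os"
)

// printLog writes the message and its attributes separated by a fixed
// two-space gap instead of padding the line out to the terminal width.
func printLog(msg, attrsStr string) {
	if attrsStr == "" {
		fmt.Fprintln(os.Stdout, msg)
		return
	}
	fmt.Fprintf(os.Stdout, "%s  %s\n", msg, attrsStr)
}

func main() {
	printLog("WARN Text logger", "cluster=kwok key1=val1 key2=key3")
}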

Failing `kwokctl snapshot` with runtime `nerdctl`

Which jobs are failing?

- name: Test Snapshot
  if: ${{ matrix.kwokctl-runtime != 'nerdctl' }} # TODO: Let the test pass with nerdctl
  shell: bash
  run: |
    ./hack/e2e-test.sh kwokctl/kwokctl_${{ matrix.kwokctl-runtime }}_snapshot

Which tests are failing?

https://github.com/kubernetes-sigs/kwok/actions/runs/3565851048/jobs/5991537840

================================================================================
Testing kwokctl/kwokctl_nerdctl_snapshot...
github.com/docker/buildx 0.9.1+azure-2 ed00243a0ce2a0aee75311b06e32d33b44729689
mkdir: cannot create directory β€˜tmp’: File exists
unpacking docker.io/local/kwok:test (sha256:5019cea8f3dd71453e676622dd7825d12c2e7ae4272560aa295d7787dfbf0245)...
Loaded image: docker.io/local/kwok:test
Test snapshot on nerdctl for 1.25.3 1.24.7 1.23.13 1.22.15 1.21.14 1.20.15
------------------------------
Testing snapshot on nerdctl for 1.25.3
{"time":"2022-11-28T14:50:53.869177459Z","level":"INFO","source":"/home/runner/work/kwok/kwok/pkg/kwokctl/cmd/create/cluster/cluster.go:75","msg":"Creating cluster","cluster":"snapshot-cluster-nerdctl-1-25-3"}
{"time":"2022-11-28T14:50:54.550340799Z","level":"INFO","source":"/home/runner/work/kwok/kwok/pkg/kwokctl/cmd/create/cluster/cluster.go:75","msg":"Starting cluster","cluster":"snapshot-cluster-nerdctl-1-25-3"}
time="2022-11-28T14:50:54Z" level=info msg="Creating network kwok-snapshot-cluster-nerdctl-1-25-3"
time="2022-11-28T14:50:54Z" level=warning msg="Ignoring: service kube_apiserver: [Links]"
time="2022-11-28T14:50:54Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:50:54Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:50:54Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:50:54Z" level=warning msg="Ignoring: service kwok_controller: [Links]"
time="2022-11-28T14:50:54Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:50:54Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:50:54Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:50:54Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:50:54Z" level=warning msg="Ignoring: service kube_controller_manager: [Links]"
time="2022-11-28T14:50:54Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:50:54Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:50:54Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:50:54Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:50:54Z" level=warning msg="Ignoring: service kube_scheduler: [Links]"
time="2022-11-28T14:50:54Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:50:54Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:50:54Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:50:54Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:50:54Z" level=info msg="Ensuring image registry.k8s.io/etcd:3.5.6-0"
time="2022-11-28T14:50:54Z" level=info msg="Ensuring image registry.k8s.io/kube-apiserver:v1.25.3"
time="2022-11-28T14:50:54Z" level=info msg="Ensuring image local/kwok:test"
time="2022-11-28T14:50:54Z" level=info msg="Ensuring image registry.k8s.io/kube-controller-manager:v1.25.3"
time="2022-11-28T14:50:54Z" level=info msg="Ensuring image registry.k8s.io/kube-scheduler:v1.25.3"
time="2022-11-28T14:50:54Z" level=info msg="Creating container kwok-snapshot-cluster-nerdctl-1-25-3-kwok-controller"
time="2022-11-28T14:50:54Z" level=info msg="Creating container kwok-snapshot-cluster-nerdctl-1-25-3-kube-scheduler"
time="2022-11-28T14:50:54Z" level=info msg="Creating container kwok-snapshot-cluster-nerdctl-1-25-3-etcd"
time="2022-11-28T14:50:54Z" level=info msg="Creating container kwok-snapshot-cluster-nerdctl-1-25-3-kube-controller-manager"
time="2022-11-28T14:50:54Z" level=info msg="Creating container kwok-snapshot-cluster-nerdctl-1-25-3-kube-apiserver"
You can now use your cluster with:
{"time":"2022-11-28T14:50:59.168789831Z","level":"INFO","source":"/home/runner/work/kwok/kwok/pkg/kwokctl/cmd/create/cluster/cluster.go:75","msg":"Cluster is ready","cluster":"snapshot-cluster-nerdctl-1-25-3"}

    kubectl config use-context kwok-snapshot-cluster-nerdctl-1-25-3

Thanks for using kwok!
No resources found
No resources found
deployment.apps/fake-pod created
node/fake-node created
Download https://github.com/etcd-io/etcd/releases/download/v3.5.6/etcd-v3.5.6-linux-amd64.tar.gz
{"time":"2022-11-28T14:51:37.882622576Z","level":"ERROR","source":"/opt/hostedtoolcache/go/1.19.3/x64/src/runtime/proc.go:250","msg":"Execute exit","err":"nerdctl cp /home/runner/.kwok/clusters/snapshot-cluster-nerdctl-1-25-3/etcd-data kwok-snapshot-cluster-nerdctl-1-25-3-etcd:/: exit status 1\ntime=\"2022-11-28T14:51:37Z\" level=warning msg=\"failed to inspect NetNS\" error=\"failed to Statfs \\\"/proc/14518/ns/net\\\": no such file or directory\" id=2af2754d12fd763d179bf1e3225b62a7ea3d5a86a37eabc1bb29944a04c8ce85\ntime=\"2022-11-28T14:51:37Z\" level=fatal msg=\"expected container status running, got stopped\"\n"}
Error: Empty snapshot restore failed
Expected: 
Actual: NAMESPACE NAME
default fake-pod-78f5b8f676-gj6r9
NAME
fake-node
{"time":"2022-11-28T14:52:34.878035422Z","level":"INFO","source":"/home/runner/work/kwok/kwok/pkg/kwokctl/cmd/delete/cluster/cluster.go:45","msg":"Stopping cluster","cluster":"snapshot-cluster-nerdctl-1-25-3"}
time="2022-11-28T14:52:34Z" level=info msg="Removing container kwok-snapshot-cluster-nerdctl-1-25-3-kube-scheduler"
time="2022-11-28T14:52:35Z" level=info msg="Removing container kwok-snapshot-cluster-nerdctl-1-25-3-kube-controller-manager"
time="2022-11-28T14:52:35Z" level=info msg="Removing container kwok-snapshot-cluster-nerdctl-1-25-3-kwok-controller"
time="2022-11-28T14:52:35Z" level=info msg="Removing container kwok-snapshot-cluster-nerdctl-1-25-3-kube-apiserver"
time="2022-11-28T14:52:35Z" level=info msg="Removing container kwok-snapshot-cluster-nerdctl-1-25-3-etcd"
time="2022-11-28T14:52:35Z" level=info msg="Removing network kwok-snapshot-cluster-nerdctl-1-25-3"
{"time":"2022-11-28T14:52:35.731964445Z","level":"INFO","source":"/home/runner/work/kwok/kwok/pkg/kwokctl/cmd/delete/cluster/cluster.go:45","msg":"Deleting cluster","cluster":"snapshot-cluster-nerdctl-1-25-3"}
{"time":"2022-11-28T14:52:35.882827316Z","level":"INFO","source":"/home/runner/work/kwok/kwok/pkg/kwokctl/cmd/delete/cluster/cluster.go:45","msg":"Cluster deleted","cluster":"snapshot-cluster-nerdctl-1-25-3"}
------------------------------
Testing snapshot on nerdctl for 1.24.7
{"time":"2022-11-28T14:52:35.899447477Z","level":"INFO","source":"/home/runner/work/kwok/kwok/pkg/kwokctl/cmd/create/cluster/cluster.go:75","msg":"Creating cluster","cluster":"snapshot-cluster-nerdctl-1-24-7"}
{"time":"2022-11-28T14:52:37.003300129Z","level":"INFO","source":"/home/runner/work/kwok/kwok/pkg/kwokctl/cmd/create/cluster/cluster.go:75","msg":"Starting cluster","cluster":"snapshot-cluster-nerdctl-1-24-7"}
time="2022-11-28T14:52:37Z" level=info msg="Creating network kwok-snapshot-cluster-nerdctl-1-24-7"
time="2022-11-28T14:52:37Z" level=warning msg="Ignoring: service kube_apiserver: [Links]"
time="2022-11-28T14:52:37Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:52:37Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:52:37Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:52:37Z" level=warning msg="Ignoring: service kube_controller_manager: [Links]"
time="2022-11-28T14:52:37Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:52:37Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:52:37Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:52:37Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:52:37Z" level=warning msg="Ignoring: service kube_scheduler: [Links]"
time="2022-11-28T14:52:37Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:52:37Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:52:37Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:52:37Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:52:37Z" level=warning msg="Ignoring: service kwok_controller: [Links]"
time="2022-11-28T14:52:37Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:52:37Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:52:37Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:52:37Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:52:37Z" level=info msg="Ensuring image registry.k8s.io/etcd:3.5.6-0"
time="2022-11-28T14:52:37Z" level=info msg="Ensuring image registry.k8s.io/kube-apiserver:v1.24.7"
time="2022-11-28T14:52:37Z" level=info msg="Ensuring image registry.k8s.io/kube-controller-manager:v1.24.7"
time="2022-11-28T14:52:37Z" level=info msg="Ensuring image registry.k8s.io/kube-scheduler:v1.24.7"
time="2022-11-28T14:52:37Z" level=info msg="Ensuring image local/kwok:test"
time="2022-11-28T14:52:37Z" level=info msg="Creating container kwok-snapshot-cluster-nerdctl-1-24-7-kube-scheduler"
time="2022-11-28T14:52:37Z" level=info msg="Creating container kwok-snapshot-cluster-nerdctl-1-24-7-kube-controller-manager"
time="2022-11-28T14:52:37Z" level=info msg="Creating container kwok-snapshot-cluster-nerdctl-1-24-7-etcd"
time="2022-11-28T14:52:37Z" level=info msg="Creating container kwok-snapshot-cluster-nerdctl-1-24-7-kwok-controller"
time="2022-11-28T14:52:37Z" level=info msg="Creating container kwok-snapshot-cluster-nerdctl-1-24-7-kube-apiserver"
You can now use your cluster with:
{"time":"2022-11-28T14:52:41.660050875Z","level":"INFO","source":"/home/runner/work/kwok/kwok/pkg/kwokctl/cmd/create/cluster/cluster.go:75","msg":"Cluster is ready","cluster":"snapshot-cluster-nerdctl-1-24-7"}

    kubectl config use-context kwok-snapshot-cluster-nerdctl-1-24-7

Thanks for using kwok!
No resources found
No resources found
deployment.apps/fake-pod created
node/fake-node created
{"time":"2022-11-28T14:53:19.502931142Z","level":"ERROR","source":"/opt/hostedtoolcache/go/1.19.3/x64/src/runtime/proc.go:250","msg":"Execute exit","err":"nerdctl cp /home/runner/.kwok/clusters/snapshot-cluster-nerdctl-1-24-7/etcd-data kwok-snapshot-cluster-nerdctl-1-24-7-etcd:/: exit status 1\ntime=\"2022-11-28T14:53:19Z\" level=warning msg=\"failed to inspect NetNS\" error=\"failed to Statfs \\\"/proc/18073/ns/net\\\": no such file or directory\" id=4fb042bd8586bcd039dac994d5262e857ed2631ddcc8509367aa5ac50201a179\ntime=\"2022-11-28T14:53:19Z\" level=fatal msg=\"expected container status running, got stopped\"\n"}
Error: Empty snapshot restore failed
Expected: 
Actual: NAMESPACE NAME
default fake-pod-6f5fffcbc-hff5p
NAME
fake-node
{"time":"2022-11-28T14:54:16.753867524Z","level":"INFO","source":"/home/runner/work/kwok/kwok/pkg/kwokctl/cmd/delete/cluster/cluster.go:45","msg":"Stopping cluster","cluster":"snapshot-cluster-nerdctl-1-24-7"}
time="2022-11-28T14:54:16Z" level=info msg="Removing container kwok-snapshot-cluster-nerdctl-1-24-7-kwok-controller"
time="2022-11-28T14:54:16Z" level=info msg="Removing container kwok-snapshot-cluster-nerdctl-1-24-7-kube-scheduler"
time="2022-11-28T14:54:17Z" level=info msg="Removing container kwok-snapshot-cluster-nerdctl-1-24-7-kube-controller-manager"
time="2022-11-28T14:54:17Z" level=info msg="Removing container kwok-snapshot-cluster-nerdctl-1-24-7-kube-apiserver"
time="2022-11-28T14:54:17Z" level=info msg="Removing container kwok-snapshot-cluster-nerdctl-1-24-7-etcd"
time="2022-11-28T14:54:17Z" level=info msg="Removing network kwok-snapshot-cluster-nerdctl-1-24-7"
{"time":"2022-11-28T14:54:17.535599245Z","level":"INFO","source":"/home/runner/work/kwok/kwok/pkg/kwokctl/cmd/delete/cluster/cluster.go:45","msg":"Deleting cluster","cluster":"snapshot-cluster-nerdctl-1-24-7"}
{"time":"2022-11-28T14:54:17.677782486Z","level":"INFO","source":"/home/runner/work/kwok/kwok/pkg/kwokctl/cmd/delete/cluster/cluster.go:45","msg":"Cluster deleted","cluster":"snapshot-cluster-nerdctl-1-24-7"}
------------------------------
Testing snapshot on nerdctl for 1.23.13
{"time":"2022-11-28T14:54:17.692811822Z","level":"INFO","source":"/home/runner/work/kwok/kwok/pkg/kwokctl/cmd/create/cluster/cluster.go:75","msg":"Creating cluster","cluster":"snapshot-cluster-nerdctl-1-23-13"}
{"time":"2022-11-28T14:54:18.561574115Z","level":"INFO","source":"/home/runner/work/kwok/kwok/pkg/kwokctl/cmd/create/cluster/cluster.go:75","msg":"Starting cluster","cluster":"snapshot-cluster-nerdctl-1-23-13"}
time="2022-11-28T14:54:18Z" level=info msg="Creating network kwok-snapshot-cluster-nerdctl-1-23-13"
time="2022-11-28T14:54:18Z" level=warning msg="Ignoring: service kube_apiserver: [Links]"
time="2022-11-28T14:54:18Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:54:18Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:54:18Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:54:18Z" level=warning msg="Ignoring: service kwok_controller: [Links]"
time="2022-11-28T14:54:18Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:54:18Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:54:18Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:54:18Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:54:18Z" level=warning msg="Ignoring: service kube_controller_manager: [Links]"
time="2022-11-28T14:54:18Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:54:18Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:54:18Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:54:18Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:54:18Z" level=warning msg="Ignoring: service kube_scheduler: [Links]"
time="2022-11-28T14:54:18Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:54:18Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:54:18Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:54:18Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:54:18Z" level=info msg="Ensuring image registry.k8s.io/etcd:3.5.6-0"
time="2022-11-28T14:54:18Z" level=info msg="Ensuring image registry.k8s.io/kube-apiserver:v1.23.13"
time="2022-11-28T14:54:18Z" level=info msg="Ensuring image local/kwok:test"
time="2022-11-28T14:54:18Z" level=info msg="Ensuring image registry.k8s.io/kube-controller-manager:v1.23.13"
time="2022-11-28T14:54:18Z" level=info msg="Ensuring image registry.k8s.io/kube-scheduler:v1.23.13"
time="2022-11-28T14:54:18Z" level=info msg="Creating container kwok-snapshot-cluster-nerdctl-1-23-13-kube-scheduler"
time="2022-11-28T14:54:18Z" level=info msg="Creating container kwok-snapshot-cluster-nerdctl-1-23-13-etcd"
time="2022-11-28T14:54:18Z" level=info msg="Creating container kwok-snapshot-cluster-nerdctl-1-23-13-kube-controller-manager"
time="2022-11-28T14:54:18Z" level=info msg="Creating container kwok-snapshot-cluster-nerdctl-1-23-13-kube-apiserver"
time="2022-11-28T14:54:18Z" level=info msg="Creating container kwok-snapshot-cluster-nerdctl-1-23-13-kwok-controller"
You can now use your cluster with:
{"time":"2022-11-28T14:54:22.92009441Z","level":"INFO","source":"/home/runner/work/kwok/kwok/pkg/kwokctl/cmd/create/cluster/cluster.go:75","msg":"Cluster is ready","cluster":"snapshot-cluster-nerdctl-1-23-13"}

    kubectl config use-context kwok-snapshot-cluster-nerdctl-1-23-13

Thanks for using kwok!
No resources found
No resources found
deployment.apps/fake-pod created
node/fake-node created
{"time":"2022-11-28T14:55:00.555011[10](https://github.com/kubernetes-sigs/kwok/actions/runs/3565851048/jobs/5991537840#step:13:11)1Z","level":"ERROR","source":"/opt/hostedtoolcache/go/1.19.3/x64/src/runtime/proc.go:250","msg":"Execute exit","err":"nerdctl cp /home/runner/.kwok/clusters/snapshot-cluster-nerdctl-1-23-13/etcd-data kwok-snapshot-cluster-nerdctl-1-23-13-etcd:/: exit status 1\ntime=\"2022-[11](https://github.com/kubernetes-sigs/kwok/actions/runs/3565851048/jobs/5991537840#step:13:12)-28T14:55:00Z\" level=warning msg=\"failed to inspect NetNS\" error=\"failed to Statfs \\\"/proc/21546/ns/net\\\": no such file or directory\" id=37a024f06534b4c24888948cb02592bfbf348a86def52b0ef420c9b2278c5e95\ntime=\"2022-11-28T14:55:00Z\" level=fatal msg=\"expected container status running, got stopped\"\n"}
Error: Empty snapshot restore failed
Expected: 
Actual: NAMESPACE NAME
default fake-pod-6bf4bdd9cc-5jqlf
NAME
fake-node
{"time":"2022-11-28T14:55:57.588653024Z","level":"INFO","source":"/home/runner/work/kwok/kwok/pkg/kwokctl/cmd/delete/cluster/cluster.go:45","msg":"Stopping cluster","cluster":"snapshot-cluster-nerdctl-1-23-13"}
time="2022-11-28T14:55:57Z" level=info msg="Removing container kwok-snapshot-cluster-nerdctl-1-23-13-kwok-controller"
time="2022-11-28T14:55:57Z" level=info msg="Removing container kwok-snapshot-cluster-nerdctl-1-23-13-kube-scheduler"
time="2022-11-28T14:55:57Z" level=info msg="Removing container kwok-snapshot-cluster-nerdctl-1-23-13-kube-controller-manager"
time="2022-11-28T14:55:58Z" level=info msg="Removing container kwok-snapshot-cluster-nerdctl-1-23-13-kube-apiserver"
time="2022-11-28T14:55:58Z" level=info msg="Removing container kwok-snapshot-cluster-nerdctl-1-23-13-etcd"
time="2022-11-28T14:55:58Z" level=info msg="Removing network kwok-snapshot-cluster-nerdctl-1-23-13"
{"time":"2022-11-28T14:55:58.389470291Z","level":"INFO","source":"/home/runner/work/kwok/kwok/pkg/kwokctl/cmd/delete/cluster/cluster.go:45","msg":"Deleting cluster","cluster":"snapshot-cluster-nerdctl-1-23-13"}
{"time":"2022-11-28T14:55:58.526502308Z","level":"INFO","source":"/home/runner/work/kwok/kwok/pkg/kwokctl/cmd/delete/cluster/cluster.go:45","msg":"Cluster deleted","cluster":"snapshot-cluster-nerdctl-1-23-13"}
------------------------------
Testing snapshot on nerdctl for 1.22.15
{"time":"2022-11-28T14:55:58.545860907Z","level":"INFO","source":"/home/runner/work/kwok/kwok/pkg/kwokctl/cmd/create/cluster/cluster.go:75","msg":"Creating cluster","cluster":"snapshot-cluster-nerdctl-1-22-15"}
{"time":"2022-11-28T14:55:59.42[12](https://github.com/kubernetes-sigs/kwok/actions/runs/3565851048/jobs/5991537840#step:13:13)21125Z","level":"INFO","source":"/home/runner/work/kwok/kwok/pkg/kwokctl/cmd/create/cluster/cluster.go:75","msg":"Starting cluster","cluster":"snapshot-cluster-nerdctl-1-22-15"}
time="2022-11-28T[14](https://github.com/kubernetes-sigs/kwok/actions/runs/3565851048/jobs/5991537840#step:13:15):55:59Z" level=info msg="Creating network kwok-snapshot-cluster-nerdctl-1-22-[15](https://github.com/kubernetes-sigs/kwok/actions/runs/3565851048/jobs/5991537840#step:13:16)"
time="2022-11-28T14:55:59Z" level=warning msg="Ignoring: service kube_apiserver: [Links]"
time="2022-11-28T14:55:59Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:55:59Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:55:59Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:55:59Z" level=warning msg="Ignoring: service kube_controller_manager: [Links]"
time="2022-11-28T14:55:59Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:55:59Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:55:59Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:55:59Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:55:59Z" level=warning msg="Ignoring: service kube_scheduler: [Links]"
time="2022-11-28T14:55:59Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:55:59Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:55:59Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:55:59Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:55:59Z" level=warning msg="Ignoring: service kwok_controller: [Links]"
time="2022-11-28T14:55:59Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:55:59Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:55:59Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:55:59Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-11-28T14:55:59Z" level=info msg="Ensuring image registry.k8s.io/etcd:3.5.6-0"
time="2022-11-28T14:55:59Z" level=info msg="Ensuring image registry.k8s.io/kube-apiserver:v1.22.15"
time="2022-11-28T14:55:59Z" level=info msg="Ensuring image registry.k8s.io/kube-controller-manager:v1.22.15"
time="2022-11-28T14:55:59Z" level=info msg="Ensuring image registry.k8s.io/kube-scheduler:v1.22.15"
time="2022-11-28T14:55:59Z" level=info msg="Ensuring image local/kwok:test"
time="2022-11-28T14:55:59Z" level=info msg="Creating container kwok-snapshot-cluster-nerdctl-1-22-15-kube-controller-manager"
time="2022-11-28T14:55:59Z" level=info msg="Creating container kwok-snapshot-cluster-nerdctl-1-22-15-kwok-controller"
time="2022-11-28T14:55:59Z" level=info msg="Creating container kwok-snapshot-cluster-nerdctl-1-22-15-kube-scheduler"
time="2022-11-28T14:55:59Z" level=info msg="Creating container kwok-snapshot-cluster-nerdctl-1-22-15-etcd"
time="2022-11-28T14:55:59Z" level=info msg="Creating container kwok-snapshot-cluster-nerdctl-1-22-15-kube-apiserver"
You can now use your cluster with:

    kubectl config use-context kwok-snapshot-cluster-nerdctl-1-22-15

Thanks for using kwok!
{"time":"2022-11-28T14:56:03.6658[16](https://github.com/kubernetes-sigs/kwok/actions/runs/3565851048/jobs/5991537840#step:13:17)766Z","level":"INFO","source":"/home/runner/work/kwok/kwok/pkg/kwokctl/cmd/create/cluster/cluster.go:75","msg":"Cluster is ready","cluster":"snapshot-cluster-nerdctl-1-22-15"}
No resources found
No resources found
deployment.apps/fake-pod created
node/fake-node created
{"time":"2022-11-28T14:56:41.2365[17](https://github.com/kubernetes-sigs/kwok/actions/runs/3565851048/jobs/5991537840#step:13:18)354Z","level":"ERROR","source":"/opt/hostedtoolcache/go/1.[19](https://github.com/kubernetes-sigs/kwok/actions/runs/3565851048/jobs/5991537840#step:13:20).3/x64/src/runtime/proc.go:250","msg":"Execute exit","err":"nerdctl cp /home/runner/.kwok/clusters/snapshot-cluster-nerdctl-1-22-15/etcd-data kwok-snapshot-cluster-nerdctl-1-22-15-etcd:/: exit status 1\ntime=\"[20](https://github.com/kubernetes-sigs/kwok/actions/runs/3565851048/jobs/5991537840#step:13:21)[22](https://github.com/kubernetes-sigs/kwok/actions/runs/3565851048/jobs/5991537840#step:13:23)-11-28T14:56:40Z\" level=warning msg=\"failed to inspect NetNS\" error=\"failed to Statfs \\\"/proc/25054/ns/net\\\": no such file or directory\" id=8ec[23](https://github.com/kubernetes-sigs/kwok/actions/runs/3565851048/jobs/5991537840#step:13:24)8fa41f2f3a7039bf6e120f2de73ad776cc8c775db43b3301157aa430f[26](https://github.com/kubernetes-sigs/kwok/actions/runs/3565851048/jobs/5991537840#step:13:27)\ntime=\"2022-11-[28](https://github.com/kubernetes-sigs/kwok/actions/runs/3565851048/jobs/5991537840#step:13:29)T14:56:[40](https://github.com/kubernetes-sigs/kwok/actions/runs/3565851048/jobs/5991537840#step:13:41)Z\" level=fatal msg=\"expected container status running, got stopped\"\n"}

Since when has it been failing?

Always

Reason for failure (if possible)

No response

Anything else we need to know?

No response

[kwok] Provided service for `attach`

# Match pods
kind: ClusterAttach
apiVersion: kwok.x-k8s.io/v1alpha1
metadata:
  name: cluster-attach-rules
spec:
  selector:
    matchNamespaces:
      - podNamespace
    matchNames:
      - podName
  attaches: []
--- 
# Just for a Pod
kind: Attach
apiVersion: kwok.x-k8s.io/v1alpha1
metadata:
  name: podName
  namespace: podNamespace
spec:
  attaches:
    - containers:
        - containerName
      
      logsFile: /tmp/kwok.log

/kind feature

Release v0.1.0

Release Checklist

Changelog

## What's Changed

This version has many changes and is still largely compatible with v0.0.1,
but it is best to recreate the old cluster to avoid unexpected failures.

**Full Changelog**: https://github.com/kubernetes-sigs/kwok/compare/v0.0.1...v0.1.0

### Breaking Changes

- Default version of Kubernetes has been updated to v1.26.0
- Default version of Prometheus has been updated to v2.41.0
- Prefer to use the existing `kubectl` instead of downloading it
- The secure port is enabled by default for kube-apiserver versions greater than v1.12 (previously the cutoff was v1.18)
- Cluster's information `kwok.yaml` has changed significantly
- If a Pod is created by a Job, it will now become Completed

### New Features

- Common
  - Add website and logo
  - Add shutdown mechanism on SIGTERM+INT
  - Structured logging
  - Add `--version` flag to print version information
  - Add `--config` flag to specify config file to set default values
  - Provide all-in-one cluster images
- Kwok
  - Support Pod/Node lifecycle simulation with `Stage`
  - Add `--experimental-enable-cni` flag to experimentally support managing IP addresses with CNI on Linux.
  - Add Kubelet services stubs, e.g. using `kubectl logs` will return TODO messages instead of errors.
- Kwokctl
  - Add `--kube-authorization` flag to support enabling kube authorization
  - Add `--kube-scheduler-config` flag to support passing kube scheduler config
  - Add `--disable-kube-scheduler` flag to support disabling kube scheduler
  - Add `--disable-kube-controller-manager` flag to support disabling kube controller manager
  - Add subcommand `etcdctl` to run the command in the etcd container
  - Add subcommand `start` and `stop` to start and stop a cluster
  - Download `docker-compose`, `kind`, `etcdctl` binary automatically

## Images

kwok
- registry.k8s.io/kwok/kwok:v0.1.0

cluster
- registry.k8s.io/kwok/cluster:{tag}
  - `v1.26` & `v1.26.0` & `v0.1.0-k8s.v1.26.0`
  - `v1.25` & `v1.25.3` & `v0.1.0-k8s.v1.25.3`
  - `v1.24` & `v1.24.7` & `v0.1.0-k8s.v1.24.7`
  - `v1.23` & `v1.23.13` & `v0.1.0-k8s.v1.23.13`
  - `v1.22` & `v1.22.15` & `v0.1.0-k8s.v1.22.15`
  - `v1.21` & `v1.21.14` & `v0.1.0-k8s.v1.21.14`

## Contributors

Thank you to everyone who contributed to this release! ❀️

Users whose commits are in this release (alphabetically by user name)

- @Fish-pro
- @Garrybest
- @Huang-Wei
- @JarHMJ
- @Songjoy
- @Zhuzhenghao
- @carlory
- @chaunceyjiang
- @hezhizhen
- @lianghao208
- @mengjiao-liu
- @muma378
- @my-git9
- @pacoxu
- @qingwave
- @sologgfun
- @windsonsea
- @wlp1153468871
- @wzshiming
- @yanggangtony
- @yibozhuang
- @zwpaper
 
And thank you very much to everyone else not listed here who contributed in other ways like filing issues,
giving feedback, testing fixes, etc. πŸ™

Artifacts

This release ships binaries via GitHub Releases and images via GCR.

Release v0.0.1

Release Checklist

Changelog

First release v0.0.1

#29 (comment)

Cannot compile in bash 3.2.x

How to use it?

  • kwok
  • kwokctl --runtime=docker (default runtime)
  • kwokctl --runtime=binary
  • kwokctl --runtime=nerdctl
  • kwokctl --runtime=kind

What happened?

(screenshot of the build error)

What did you expect to happen?

It should compile.

How can we reproduce it (as minimally and precisely as possible)?

  1. Mac m1
  2. GNU bash, version 3.2.57(1)-release-(arm64-apple-darwin22)
  3. make build

Anything else we need to know?

echo "${os,,}"

The `${os,,}` lowercase expansion is only supported in bash version 4.0 or above.

OS version

On Darwin:

$ uname -a
Darwin chauncey-2.local 22.1.0 Darwin Kernel Version 22.1.0: Sun Oct 9 20:14:30 PDT 2022; root:xnu-8792.41.9~2/RELEASE_ARM64_T8103 arm64

GNU bash, version 3.2.57(1)-release-(arm64-apple-darwin22)

Add a configuration

# ~/.kwok/kwok.yml
kind: KwokConfiguration
apiVersion: kwok.x-k8s.io/v1alpha1
spec:
  cidr: ""
  disregardStatusWithAnnotationSelector: ""
  disregardStatusWithLabelSelector: ""
  manageAllNodes: false
  manageNodesWithAnnotationSelector: ""
  manageNodesWithLabelSelector: ""
  nodeIP: ""
# ~/.kwok/kwokctl.yml
kind: KwokctlConfiguration
apiVersion: kwok.x-k8s.io/v1alpha1
spec:
  workdir: ""
  kubeApiserverPort: 0
  runtime: ""
  prometheusPort: 0
  kwokVersion: ""
  kubeVersion: ""
  etcdVersion: ""
  prometheusVersion: ""
  securePort: false
  quietPull: false
  kubeImagePrefix: ""
  etcdImagePrefix: ""
  kwokImagePrefix: ""
  prometheusImagePrefix: ""
  etcdImage: ""
  kubeApiserverImage: ""
  kubeControllerManagerImage: ""
  kubeSchedulerImage: ""
  kwokControllerImage: ""
  prometheusImage: ""
  kindNodeImagePrefix: ""
  kindNodeImage: ""
  kubeBinaryPrefix: ""
  kubeApiserverBinary: ""
  kubeControllerManagerBinary: ""
  kubeSchedulerBinary: ""
  kubectlBinary: ""
  etcdBinaryPrefix: ""
  etcdBinary: ""
  etcdBinaryTar: ""
  kwokBinaryPrefix: ""
  kwokControllerBinary: ""
  prometheusBinaryPrefix: ""
  prometheusBinary: ""
  prometheusBinaryTar: ""
  mode: ""
  kubeFeatureGates: ""
  kubeRuntimeConfig: ""
  kubeAuditPolicy: ""
  kubeAuthorization: ""

Pod lifecycle simulation configuration #67

[kwokctl] Update start/stop for nerdctl

What would you like to be added?

For nerdctl 1.2 or later, change start/stop to behave like it does with docker; keep the rest unchanged.

Why is this needed?

// TODO: nerdctl does not support `compose start` in v1.1.0 or earlier.
// Support was merged into the main branch in https://github.com/containerd/nerdctl/pull/1656, but there is no release containing it yet.
subcommand := []string{"start"}
if conf.Options.Runtime == consts.RuntimeTypeNerdctl {
	subcommand = []string{"up", "-d"}
}

if conf.Options.Runtime == consts.RuntimeTypeNerdctl {
	backupFilename := c.GetWorkdirPath("restart.db")
	fi, err := os.Stat(backupFilename)
	if err == nil {
		if fi.IsDir() {
			return fmt.Errorf("wrong backup file %s, it cannot be a directory, please remove it", backupFilename)
		}
		if err := c.SnapshotRestore(ctx, backupFilename); err != nil {
			return fmt.Errorf("failed to restore cluster data: %w", err)
		}
		if err := os.Remove(backupFilename); err != nil {
			return fmt.Errorf("failed to remove backup file: %w", err)
		}
	} else if !os.IsNotExist(err) {
		return err
	}
}

// TODO: nerdctl does not support `compose stop` in v1.0.0 or earlier.
subcommand := "stop"
if conf.Options.Runtime == consts.RuntimeTypeNerdctl {
	subcommand = "down"
	err := c.SnapshotSave(ctx, c.GetWorkdirPath("restart.db"))
	if err != nil {
		return fmt.Errorf("failed to snapshot cluster data: %w", err)
	}
}
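
A minimal sketch of the requested change, assuming a hypothetical helper that parses the nerdctl version string and only falls back to `up -d`/`down` for releases older than 1.2:

package main

import (
	"fmt"
	"strings"
)

// composeSubcommands returns the compose subcommands to use for start/stop.
// Hypothetical sketch: assumes a version string like "v1.2.0"; nerdctl >= 1.2
// supports `compose start`/`compose stop` like docker, older releases need
// `up -d`/`down`.
func composeSubcommands(nerdctlVersion string) (start, stop []string) {
	var major, minor int
	fmt.Sscanf(strings.TrimPrefix(nerdctlVersion, "v"), "%d.%d", &major, &minor)
	if major > 1 || (major == 1 && minor >= 2) {
		return []string{"start"}, []string{"stop"}
	}
	return []string{"up", "-d"}, []string{"down"}
}

func main() {
	fmt.Println(composeSubcommands("v1.1.0")) // [up -d] [down]
}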

[kwok] Provided service for `port-forward`

# Match pods
kind: ClusterPortForward
apiVersion: kwok.x-k8s.io/v1alpha1
metadata:
  name: cluster-port-forward-rules
spec:
  selector:
    matchNamespaces:
      - podNamespace
    matchNames:
      - podName
  forwards: []
--- 
# Just for a Pod
kind: PortForward
apiVersion: kwok.x-k8s.io/v1alpha1
metadata:
  name: podName
  namespace: podNamespace
spec:
  # This is the list of ports that will be forwarded
  forwards:

    # match the port 8001 forwards to the targetAddress:targetPort
    - ports:
        - 8001
      target:
        port: 80
        address: localhost

    # match the port 8002 forwards with the stdin/stdout of nc
    - ports:
        - 8002
      command:
        - nc
        - localhost
        - '80'

    # match the port 8003 forwards with the stdin/stdout of kubectl
    - ports:
        - 8003
      command:
        - kubectl
        - exec
        - -i
        - -n
        - podNamespace
        - podName
        - -c
        - podContainer
        - --
        - nc
        - localhost
        - '80'

    # match the port 8004 forwards with the stdin/stdout of docker
    - ports:
        - 8004
      command:
        - docker
        - exec
        - -i
        - container
        - nc
        - localhost
        - '80'

    # when ports is not set, this entry acts as the default
    - target:
        port: 80
        address: localhost
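
For the command-based forwards above, the mechanics amount to wiring each accepted connection to the command's stdin/stdout. A minimal sketch with illustrative names, not kwok's implementation:

package main

import (
	"net"
	"os/exec"
)

// serveCommandForward runs the command for each accepted connection and
// wires the connection to its stdin/stdout, like the `nc localhost 80`
// example in the config above.
func serveCommandForward(l net.Listener, name string, args ...string) error {
	for {
		conn, err := l.Accept()
		if err != nil {
			return err
		}
		go func(c net.Conn) {
			defer c.Close()
			cmd := exec.Command(name, args...)
			cmd.Stdin = c
			cmd.Stdout = c
			_ = cmd.Run()
		}(conn)
	}
}

func main() {
	l, err := net.Listen("tcp", "127.0.0.1:8002")
	if err != nil {
		panic(err)
	}
	_ = serveCommandForward(l, "nc", "localhost", "80")
}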

[UMBRELLA] Code Cleanup

cat .golangci.yaml | grep '# - ' | awk '{print $3}'

https://golangci-lint.run/usage/linters

How to do it

Install golangci-lint

go install github.com/golangci/golangci-lint/cmd/golangci-lint@<version>

Uncomment or add the checks you want to clean up

https://github.com/kubernetes-sigs/kwok/blob/main/.golangci.yaml#L3

Check

Run `golangci-lint run -c .golangci.yaml` from the root of this repo

Clean up

Clean up the code based on the errors reported by the check

Example

#108

[kwok] Provided service for `logs`

# Match pods
kind: ClusterLogs
apiVersion: kwok.x-k8s.io/v1alpha1
metadata:
  name: cluster-logs-rules
spec:
  selector:
    matchNamespaces:
      - podNamespace
    matchNames:
      - podName
  logs: []
--- 
# Just for a Pod
kind: Logs
apiVersion: kwok.x-k8s.io/v1alpha1
metadata:
  name: podName
  namespace: podNamespace
spec:
  logs:
    - containers:
        - containerName

      # The first response is the contents of the logsFile
      # TODO: I'm not sure how to support --since-time, --since, --tail, --previous
      logsFile: /tmp/kwok.log

      # Follow the log output if true
      follow: true

/kind feature
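
A minimal sketch of how serving logsFile with `follow: true` could work: stream the current file contents, then keep polling for appended data. The function and polling interval are illustrative assumptions, not kwok's implementation.

package main

import (
	"io"
	"os"
	"time"
)

// streamLogs copies the current contents of path to w; with follow set,
// it keeps polling for newly appended data.
func streamLogs(w io.Writer, path string, follow bool) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	for {
		// io.Copy returns nil at EOF and leaves the offset at the end,
		// so each iteration only writes newly appended data.
		if _, err := io.Copy(w, f); err != nil {
			return err
		}
		if !follow {
			return nil
		}
		time.Sleep(time.Second)
	}
}

func main() {
	_ = streamLogs(os.Stdout, "/tmp/kwok.log", false)
}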

[kwokctl] Support k8s format for snapshot

kwokctl snapshot save --path snapshot.yaml --format=k8s
kwokctl snapshot restore --path snapshot.yaml --format=k8s

Save --format=k8s: means to get the data in protobuf format from etcd and convert it to YAML.
Restore --format=k8s: means to convert the k8s YAML to protobuf format and put it into the etcd database.

Non-goal: the ability to filter by namespace, resource, label, etc.

[kwok] Allow for delay & jitter on pod-controller applying Ready condition to pod

/kind feature

For testing monitoring tools, and deployment tools like helm/gitops, under a large amount of simulated load, it's useful to have kwok set up in a cluster. An issue with that is that kwok moves pods to Ready too fast to simulate a realistic use case.

Would it be OK to add some deployment-time configuration to kwok that applies some delay (with jitter) before it moves pods to the Ready state?
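
A minimal sketch of the requested behavior, assuming a hypothetical base delay plus a uniformly random jitter; the names and the jitter model are assumptions, not kwok's API:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// readyDelay returns the base delay plus a uniformly random jitter
// in [0, jitter).
func readyDelay(base, jitter time.Duration) time.Duration {
	if jitter <= 0 {
		return base
	}
	return base + time.Duration(rand.Int63n(int64(jitter)))
}

func main() {
	// e.g. move a pod to Ready after 10s plus up to 5s of jitter.
	fmt.Println(readyDelay(10*time.Second, 5*time.Second))
}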

support customized KubeSchedulerConfiguration

What would you like to be added?

inspired by a user in tw.

I tried kwok and it feels pretty good. But I still can't seem to set the KubeSchedulerConfiguration.

Why is this needed?

We use it to simulate the scheduling of our real cluster and want to make the configuration the same as in our real cluster.

[enhancement] update readme

Hi, I used fake-kubelet before and would like to give kwok a try.
Is there any guide for how to use kwok?
Looking forward to a README enhancement. :)

proposal: auto-detect docker or nerdctl runtime when no runtime flag is specified

What would you like to be added?

  1. Change the runtime flag default to empty.
  2. Auto-detect the runtime by checking whether a docker/nerdctl executable exists in PATH.

I would like to implement it if accepted.

Why is this needed?

nerdctl is becoming more popular as a docker replacement, but nerdctl users currently have to specify the runtime flag or add an environment variable.

It would be more convenient if we could detect it ourselves.
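
A minimal sketch of the proposed detection, using exec.LookPath to probe PATH; the probe order (docker before nerdctl) is an assumption:

package main

import (
	"fmt"
	"os/exec"
)

// detectRuntime picks the first container CLI found in PATH when no
// --runtime flag is given.
func detectRuntime() (string, error) {
	for _, name := range []string{"docker", "nerdctl"} {
		if _, err := exec.LookPath(name); err == nil {
			return name, nil
		}
	}
	return "", fmt.Errorf("neither docker nor nerdctl found in PATH")
}

func main() {
	rt, err := detectRuntime()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("detected runtime:", rt)
}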

[kwok] Pod lifecycle simulation

This is just a preliminary design

Goals

  • Simulate the pod's real behavior as closely as possible
  • Flexible and configurable
  • Present some templates:
    • Up and running quickly
    • Simulates realistic normal/abnormal behavior

Design 1

kind: Scenario
apiVersion: kwok.x-k8s.io/v1alpha1
metadata:
  name: scenario-0
spec:
  selector:
    matchLabels:
      app: demo
  stages:
  - kind: Duration
    delay: 5s
    jitter: 0.2

  - kind: Switch
    cases:
    - weight: 9
      stages: 
      - kind: PresetStatus
        template: PullImage
    - weight: 1
      stages: 
      - kind: PresetStatus
        template: FailedPullImage
      - kind: GoToBegin

  - kind: Duration
    delay: 1s

  - kind: PresetStatus
    template: Creating

  - kind: Duration
    delay: 1s
    jitter: 0.1

  - kind: PresetStatus
    template: NotReady

  - kind: Duration
    delay: 1s
    jitter: 0.1

  - kind: Switch
    cases:
    - weight: 20
      stages: 
      - kind: PresetStatus
        template: Ready
      - kind: End

    - weight: 5
      stages: 
      - kind: PresetStatus
        template: Failed
      - kind: End

    - weight: 5
      stages:
      - kind: PresetStatus
        template: Restart
      - kind: GoToBegin

Design 2

kind: Stage
apiVersion: kwok.x-k8s.io/v1alpha1
metadata:
  name: pod-create
spec:
  resourceRef:
    apiGroup: v1
    kind: Pod
  selector:
    matchExpressions:
      - key: '.metadata.deletionTimestamp'
        operator: 'DoesNotExist'
      - key: '.status.podIP'
        operator: 'DoesNotExist'
  weight: 1
  delay:
    durationSeconds: 1
  next:
    event:
      type: Normal
      reason: Created
      message: Created container
    finalizers:
      add:
        - value: 'kwok.x-k8s.io/fake'
    statusTemplate: |
      {{ $now := Now }}
      conditions:
      {{ if .spec.initContainers }}
        - lastProbeTime: null
          lastTransitionTime: '{{ $now }}'
          message: 'containers with incomplete status: [{{ range .spec.initContainers }} {{ .name }} {{ end }}]'
          reason: ContainersNotInitialized
          status: "False"
          type: Initialized
      {{ else }}
        - lastProbeTime: null
          lastTransitionTime: '{{ $now }}'
          status: "True"
          type: Initialized
      {{ end }}
        - lastProbeTime: null
          lastTransitionTime: '{{ $now }}'
          message: 'containers with unready status: [{{ range .spec.containers }} {{ .name }} {{ end }}]'
          reason: ContainersNotReady
          status: "False"
          type: Ready
        - lastProbeTime: null
          lastTransitionTime: '{{ $now }}'
          message: 'containers with unready status: [{{ range .spec.containers }} {{ .name }} {{ end }}]'
          reason: ContainersNotReady
          status: "False"
          type: ContainersReady
      {{ range .spec.readinessGates }}
        - lastTransitionTime: {{ $now }}
          status: "True"
          type: {{ .conditionType }}
      {{ end }}
      {{ if .spec.initContainers }}
      initContainerStatuses:
        {{ range .spec.initContainers }}
        - image: {{ .image }}
          name: {{ .name }}
          ready: false
          restartCount: 0
          started: false
          state:
            waiting:
              reason: PodInitializing
        {{ end }}
      containerStatuses:
        {{ range .spec.containers }}
        - image: {{ .image }}
          name: {{ .name }}
          ready: false
          restartCount: 0
          started: false
          state:
            waiting:
              reason: PodInitializing
        {{ end }}
      {{ else }}
      containerStatuses:
        {{ range .spec.containers }}
        - image: {{ .image }}
          name: {{ .name }}
          ready: false
          restartCount: 0
          started: false
          state:
            waiting:
              reason: ContainerCreating
        {{ end }}
      {{ end }}
      hostIP: {{ with .status.hostIP }} {{ . }} {{ else }} {{ NodeIP }} {{ end }}
      podIP: {{ with .status.podIP }} {{ . }} {{ else }} {{ PodIP }} {{ end }}
      phase: Pending
---
kind: Stage
apiVersion: kwok.x-k8s.io/v1alpha1
metadata:
  name: pod-init-container-running
spec:
  resourceRef:
    apiGroup: v1
    kind: Pod
  selector:
    matchExpressions:
      - key: '.metadata.deletionTimestamp'
        operator: 'DoesNotExist'
      - key: '.status.phase'
        operator: 'In'
        values:
          - 'Pending'
      - key: '.status.conditions.[] | select( .type == "Initialized" ) | .status'
        operator: 'NotIn'
        values:
          - 'True'
      - key: '.status.initContainerStatuses.[].state.waiting.reason'
        operator: 'Exists'
  weight: 1
  delay:
    durationSeconds: 5
  next:
    statusTemplate: |
      {{ $now := Now }}
      {{ $root := . }}
      initContainerStatuses:
        {{ range $index, $item := .spec.initContainers }}
        {{ $origin := index $root.status.initContainerStatuses $index }}
        - image: {{ $item.image }}
          name: {{ $item.name }}
          ready: true
          restartCount: 0
          started: true
          state:
            running:
              startedAt: '{{ $now }}'
        {{ end }}
---
kind: Stage
apiVersion: kwok.x-k8s.io/v1alpha1
metadata:
  name: pod-init-container-completed
spec:
  resourceRef:
    apiGroup: v1
    kind: Pod
  selector:
    matchExpressions:
      - key: '.metadata.deletionTimestamp'
        operator: 'DoesNotExist'
      - key: '.status.phase'
        operator: 'In'
        values:
          - 'Pending'
      - key: '.status.conditions.[] | select( .type == "Initialized" ) | .status'
        operator: 'NotIn'
        values:
          - 'True'
      - key: '.status.initContainerStatuses.[].state.running.startedAt'
        operator: 'Exists'
  weight: 1
  delay:
    durationSeconds: 5
  next:
    statusTemplate: |
      {{ $now := Now }}
      {{ $root := . }}
      conditions:
        - lastProbeTime: null
          lastTransitionTime: '{{ $now }}'
          status: "True"
          type: Initialized
      initContainerStatuses:
        {{ range $index, $item := .spec.initContainers }}
        {{ $origin := index $root.status.initContainerStatuses $index }}
        - image: {{ $item.image }}
          name: {{ $item.name }}
          ready: true
          restartCount: 0
          started: false
          state:
            terminated:
              exitCode: 0
              finishedAt: '{{ $now }}'
              reason: Completed
              startedAt: '{{ $now }}'
        {{ end }}
      containerStatuses:
        {{ range .spec.containers }}
        - image: {{ .image }}
          name: {{ .name }}
          ready: false
          restartCount: 0
          started: false
          state:
            waiting:
              reason: ContainerCreating
        {{ end }}
---
kind: Stage
apiVersion: kwok.x-k8s.io/v1alpha1
metadata:
  name: pod-ready
spec:
  resourceRef:
    apiGroup: v1
    kind: Pod
  selector:
    matchExpressions:
      - key: '.metadata.deletionTimestamp'
        operator: 'DoesNotExist'
      - key: '.status.phase'
        operator: 'In'
        values:
          - 'Pending'
      - key: '.status.conditions.[] | select( .type == "Initialized" ) | .status'
        operator: 'In'
        values:
          - 'True'
      - key: '.status.conditions.[] | select( .type == "ContainersReady" ) | .status'
        operator: 'NotIn'
        values:
          - 'True'
  weight: 1
  delay:
    durationSeconds: 10
  next:
    delete: false
    statusTemplate: |
      {{ $now := Now }}
      {{ $root := . }}
      conditions:
        - lastProbeTime: null
          lastTransitionTime: '{{ $now }}'
          message: ''
          reason: ''
          status: "True"
          type: Ready
        - lastProbeTime: null
          lastTransitionTime: '{{ $now }}'
          message: ''
          reason: ''
          status: "True"
          type: ContainersReady
      containerStatuses:
        {{ range $index, $item := .spec.containers }}
        {{ $origin := index $root.status.containerStatuses $index }}
        - image: {{ $item.image }}
          name: {{ $item.name }}
          ready: true
          restartCount: 0
          started: true
          state:
            running:
              startedAt: '{{ $now }}'
        {{ end }}
      phase: Running
---
kind: Stage
apiVersion: kwok.x-k8s.io/v1alpha1
metadata:
  name: pod-completed-for-job
spec:
  resourceRef:
    apiGroup: v1
    kind: Pod
  selector:
    matchExpressions:
      - key: '.metadata.deletionTimestamp'
        operator: 'DoesNotExist'
      - key: '.status.phase'
        operator: 'In'
        values:
          - 'Running'
      - key: '.status.conditions.[] | select( .type == "Ready" ) | .status'
        operator: 'In'
        values:
          - 'True'
      - key: '.metadata.ownerReferences.[].kind'
        operator: 'In'
        values:
          - 'Job'
  weight: 1
  delay:
    durationSeconds: 10
  next:
    delete: false
    statusTemplate: |
      {{ $now := Now }}
      {{ $root := . }}
      containerStatuses:
        {{ range $index, $item := .spec.containers }}
        {{ $origin := index $root.status.containerStatuses $index }}
        - image: {{ $item.image }}
          name: {{ $item.name }}
          ready: true
          restartCount: 0
          started: false
          state:
            terminated:
              exitCode: 0
              finishedAt: '{{ $now }}'
              reason: Completed
              startedAt: '{{ $now }}'
        {{ end }}
      phase: Succeeded
---
kind: Stage
apiVersion: kwok.x-k8s.io/v1alpha1
metadata:
  name: pod-remove-finalizer
spec:
  resourceRef:
    apiGroup: v1
    kind: Pod
  selector:
    matchExpressions:
      - key: '.metadata.deletionTimestamp'
        operator: 'Exists'
      - key: '.metadata.finalizers'
        operator: 'In'
        values:
          - 'kwok.x-k8s.io/fake'
  weight: 1
  delay:
    durationSeconds: 1
  next:
    finalizers:
      remove:
        - value: 'kwok.x-k8s.io/fake'
    event:
      type: Normal
      reason: Killing
      message: Stopping container
---
kind: Stage
apiVersion: kwok.x-k8s.io/v1alpha1
metadata:
  name: pod-delete
spec:
  resourceRef:
    apiGroup: v1
    kind: Pod
  selector:
    matchExpressions:
      - key: '.metadata.deletionTimestamp'
        operator: 'Exists'
      - key: '.metadata.finalizers'
        operator: 'DoesNotExist'
  weight: 1
  delay:
    durationSeconds: 1
    jitterDurationSecondsFrom:
      expressionFrom: '.metadata.deletionTimestamp'
  next:
    delete: true
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Stage is an API that describes the staged change of a resource
type Stage struct {
	metav1.TypeMeta `json:",inline"`
	// Standard list metadata.
	// More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
	metav1.ObjectMeta `json:"metadata,omitempty"`
	// Spec holds information about the request being evaluated.
	Spec StageSpec `json:"spec,omitempty"`
}

// StageSpec defines the specification for Stage.
type StageSpec struct {
	// ResourceRef specifies the Kind and version of the resource.
	ResourceRef StageResourceRef `json:"resourceRef"`
	// Selector specifies the resources that the stage will be applied to.
	Selector *StageSelector `json:"selector,omitempty"`
	// Weight is the weight of the current stage; when multiple stages match,
	// the next stage is chosen at random based on the weights.
	Weight *int `json:"weight,omitempty"`
	// Delay means there is a delay in this stage.
	Delay *StageDelay `json:"delay,omitempty"`
	// Next indicates that this stage will be moved to.
	Next StageNext `json:"next"`
}

// StageResourceRef specifies the kind and version of the resource.
type StageResourceRef struct {
	// APIGroup of the referent.
	APIGroup *string `json:"apiGroup,omitempty"`
	// Kind of the referent.
	Kind string `json:"kind"`
}

// StageDelay describes the delay time before going to next.
type StageDelay struct {
	// DurationSeconds indicates the stage delay time (in seconds).
	// If JitterDurationSeconds is less than DurationSeconds, then JitterDurationSeconds is used.
	DurationSeconds int `json:"durationSeconds"`
	// DurationSecondsFrom is the expression used to get the value.
	// If it is a time.Time type, now() is subtracted from the value to get DurationSeconds.
	// If it is a string type, the value will be parsed by time.ParseDuration.
	DurationSecondsFrom *ExpressionFromSource `json:"durationSecondsFrom,omitempty"`

	// JitterDurationSeconds is the duration plus an additional amount chosen uniformly
	// at random from the interval between `durationSeconds` and `jitterDurationSeconds`.
	JitterDurationSeconds *int `json:"jitterDurationSeconds,omitempty"`
	// JitterDurationSecondsFrom is the expression used to get the value.
	// If it is a time.Time type, now() is subtracted from the value to get JitterDurationSeconds.
	// If it is a string type, the value will be parsed by time.ParseDuration.
	JitterDurationSecondsFrom *ExpressionFromSource `json:"jitterDurationSecondsFrom,omitempty"`
}

// StageNext describes a stage will be moved to.
type StageNext struct {
	// Event means that an event will be sent.
	Event *StageEvent `json:"event,omitempty"`
	// Finalizers means that finalizers will be modified.
	Finalizers *StageFinalizers `json:"finalizers,omitempty"`
	// Delete means that the resource will be deleted if true.
	Delete bool `json:"delete,omitempty"`
	// StatusTemplate indicates the template for modifying the status of the resource in the next.
	StatusTemplate string `json:"statusTemplate,omitempty"`
}

// StageFinalizers describes the modifications in the finalizers of a resource.
type StageFinalizers struct {
	// Add means that the Finalizers will be added to the resource.
	Add []FinalizerItem `json:"add,omitempty"`
	// Remove means that the Finalizers will be removed from the resource.
	Remove []FinalizerItem `json:"remove,omitempty"`
	// Empty means that the Finalizers for that resource will be emptied.
	Empty bool `json:"empty,omitempty"`
}

// FinalizerItem describes one of the finalizers.
type FinalizerItem struct {
	Value string `json:"value,omitempty"`
}

// StageEvent describes a Kubernetes event.
type StageEvent struct {
	// Type is the type of this event (Normal, Warning); it is machine-readable.
	Type string `json:"type,omitempty"`
	// Reason is why the action was taken. It is human-readable.
	Reason string `json:"reason,omitempty"`
	// Message is a human-readable description of the status of this operation.
	Message string `json:"message,omitempty"`
}

// StageSelector is a resource selector. The results of matchLabels, matchAnnotations, and
// matchExpressions are ANDed. An empty resource selector matches all objects. A null
// resource selector matches no objects.
type StageSelector struct {
	// MatchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
	// map is equivalent to an element of matchExpressions, whose key field is ".metadata.labels[key]", the
	// operator is "In", and the values array contains only "value". The requirements are ANDed.
	MatchLabels map[string]string `json:"matchLabels,omitempty"`
	// MatchAnnotations is a map of {key,value} pairs. A single {key,value} in the matchAnnotations
	// map is equivalent to an element of matchExpressions, whose key field is ".metadata.annotations[key]", the
	// operator is "In", and the values array contains only "value". The requirements are ANDed.
	MatchAnnotations map[string]string `json:"matchAnnotations,omitempty"`
	// MatchExpressions is a list of label selector requirements. The requirements are ANDed.
	MatchExpressions []SelectorRequirement `json:"matchExpressions,omitempty"`
}

// SelectorRequirement is a resource selector requirement that contains a key, an operator,
// and values, where the operator relates the key to the values.
type SelectorRequirement struct {
	// The name of the scope that the selector applies to.
	Key string `json:"key"`
	// Represents a scope's relationship to a set of values.
	Operator SelectorOperator `json:"operator"`
	// An array of string values.
	// If the operator is In or NotIn, the values array must be non-empty.
	// If the operator is Exists or DoesNotExist, the values array must be empty.
	Values []string `json:"values,omitempty"`
}

// SelectorOperator is the set of operators that can be used in a selector requirement.
type SelectorOperator string

var (
	SelectorOpIn           SelectorOperator = "In"
	SelectorOpNotIn        SelectorOperator = "NotIn"
	SelectorOpExists       SelectorOperator = "Exists"
	SelectorOpDoesNotExist SelectorOperator = "DoesNotExist"
)

// ExpressionFromSource represents the source of a value extracted by an expression.
type ExpressionFromSource struct {
	// ExpressionFrom is the expression used to get the value.
	ExpressionFrom string `json:"expressionFrom,omitempty"`
}
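
Putting these types together, a minimal Stage manifest might look like the sketch below. The field names come from the JSON tags above; the metadata name, selector key, delay values, and status template contents are illustrative assumptions, not shipped defaults:

apiVersion: kwok.x-k8s.io/v1alpha1
kind: Stage
metadata:
  name: pod-ready-example           # hypothetical name
spec:
  resourceRef:
    apiGroup: v1
    kind: Pod
  selector:
    matchExpressions:
      # only match pods that are not being deleted
      - key: '.metadata.deletionTimestamp'
        operator: 'DoesNotExist'
  weight: 1
  delay:
    durationSeconds: 1              # wait at least 1s...
    jitterDurationSeconds: 5        # ...and at most 5s, chosen uniformly at random
  next:
    event:
      type: Normal
      reason: Ready
      message: Simulated pod is ready
    statusTemplate: |
      phase: Running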

Failed to start cluster with runtime nerdctl

Which jobs are failing?

test-kwokctl (ubuntu-latest, nerdctl)

Which tests are failing?

Testing audit on nerdctl for 1.20.15
{"time":"2022-12-06T08:32:14.051783288Z","level":"INFO","source":"/home/runner/work/kwok/kwok/pkg/kwokctl/cmd/create/cluster/cluster.go:172","msg":"Creating cluster","cluster":"audit-cluster-nerdctl-1-20-15"}
{"time":"2022-12-06T08:32:14.926957514Z","level":"INFO","source":"/home/runner/work/kwok/kwok/pkg/kwokctl/cmd/create/cluster/cluster.go:230","msg":"Starting cluster","cluster":"audit-cluster-nerdctl-1-20-15"}
time="2022-12-06T08:32:14Z" level=info msg="Creating network kwok-audit-cluster-nerdctl-1-20-15"
time="2022-12-06T08:32:14Z" level=warning msg="Ignoring: service kube_apiserver: [Links]"
time="2022-12-06T08:32:14Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-12-06T08:32:14Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-12-06T08:32:14Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-12-06T08:32:14Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-12-06T08:32:14Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-12-06T08:32:14Z" level=warning msg="Ignoring: service kube_scheduler: [Links]"
time="2022-12-06T08:32:14Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-12-06T08:32:14Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-12-06T08:32:14Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-12-06T08:32:14Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-12-06T08:32:14Z" level=warning msg="Ignoring: service kwok_controller: [Links]"
time="2022-12-06T08:32:14Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-12-06T08:32:14Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-12-06T08:32:14Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-12-06T08:32:14Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-12-06T08:32:14Z" level=warning msg="Ignoring: service kube_controller_manager: [Links]"
time="2022-12-06T08:32:14Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-12-06T08:32:14Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-12-06T08:32:14Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-12-06T08:32:14Z" level=warning msg="Ignoring: volume: Bind: [CreateHostPath]"
time="2022-12-06T08:32:14Z" level=info msg="Ensuring image registry.k8s.io/etcd:3.4.13-0"
time="2022-12-06T08:32:14Z" level=info msg="Ensuring image registry.k8s.io/kube-apiserver:v1.20.15"
time="2022-12-06T08:32:14Z" level=info msg="Ensuring image registry.k8s.io/kube-scheduler:v1.20.15"
time="2022-12-06T08:32:15Z" level=info msg="Ensuring image local/kwok:test"
time="2022-12-06T08:32:15Z" level=info msg="Ensuring image registry.k8s.io/kube-controller-manager:v1.20.15"
time="2022-12-06T08:32:15Z" level=info msg="Creating container kwok-audit-cluster-nerdctl-1-20-15-kube-scheduler"
time="2022-12-06T08:32:15Z" level=info msg="Creating container kwok-audit-cluster-nerdctl-1-20-15-kube-controller-manager"
time="2022-12-06T08:32:15Z" level=info msg="Creating container kwok-audit-cluster-nerdctl-1-20-15-kwok-controller"
time="2022-12-06T08:32:15Z" level=info msg="Creating container kwok-audit-cluster-nerdctl-1-20-15-etcd"
time="2022-12-06T08:32:15Z" level=info msg="Creating container kwok-audit-cluster-nerdctl-1-20-15-kube-apiserver"
time="2022-12-06T08:32:16Z" level=fatal msg="error while creating container kwok-audit-cluster-nerdctl-1-20-15-kube-controller-manager: exit status 1"
{"time":"2022-12-06T08:32:16.514310888Z","level":"ERROR","source":"/home/runner/work/kwok/kwok/cmd/kwokctl/main.go:37","msg":"Execute exit","err":"failed to start cluster \"kwok-audit-cluster-nerdctl-1-20-15\": nerdctl compose up -d --quiet-pull: exit status 1"}

Since when has it been failing?

None

Reason for failure (if possible)

No response

Anything else we need to know?

No response

cannot build image: invalid reference format

How to use it?

  • kwok
  • kwokctl --runtime=docker (default runtime)
  • kwokctl --runtime=binary
  • kwokctl --runtime=nerdctl
  • kwokctl --runtime=kind

What happened?

[screenshot: `make build-image` failing with "invalid reference format"]

What did you expect to happen?

The image builds successfully.

How can we reproduce it (as minimally and precisely as possible)?

  1. git clone https://github.com/kubernetes-sigs/kwok.git
  2. make build-image

Anything else we need to know?

kwok/Makefile

Lines 52 to 55 in a22f3cf

KWOK_IMAGE ?= $(STAGING_IMAGE_PREFIX)/kwok
CLUSTER_IMAGE ?= $(STAGING_IMAGE_PREFIX)/cluster

If STAGING_IMAGE_PREFIX is not set, then KWOK_IMAGE and CLUSTER_IMAGE become /kwok and /cluster. This is the wrong format: an image reference cannot begin with /, so the build fails with "invalid reference format".

OS version

```console
$ uname -a
Linux kk-instance-swjs2 3.10.0-1127.el7.x86_64 #1 SMP Tue Mar 31 23:36:51 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
```

Units as part of the field name for the duration field

https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#units

Units must either be explicit in the field name (e.g., timeoutSeconds), or must be specified as part of the value (e.g., resource.Quantity). Which approach is preferred is TBD, though currently we use the fooSeconds convention for durations.
Duration fields must be represented as integer fields with units being part of the field name (e.g. leaseDurationSeconds). We don't use Duration in the API since that would require clients to implement go-compatible parsing.

Originally posted by @wzshiming in #74 (comment)
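
Applied to the StageDelay type defined above, the convention reads like this (the values are illustrative):

# Units are part of the field name, not the value:
delay:
  durationSeconds: 30             # plain integer seconds, not "30s"
  jitterDurationSeconds: 60       # plain integer, not a parsed Duration string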

Failed to start cluster with runtime kind

Which jobs are failing?

test-kwokctl (macos-latest, kind)

Which tests are failing?

Test workable on kind for 1.25.3
------------------------------
Testing workable on kind for 1.25.3
{"time":"2022-12-06T08:31:23.436495Z","level":"INFO","source":"/Users/runner/work/kwok/kwok/pkg/kwokctl/cmd/create/cluster/cluster.go:172","msg":"Creating cluster","cluster":"cluster-kind-1-25-3"}
{"time":"2022-12-06T08:31:23.519019Z","level":"INFO","source":"/Users/runner/work/kwok/kwok/pkg/kwokctl/runtime/kind/cluster.go:134","msg":"Pull image","cluster":"cluster-kind-1-25-3","image":"docker.io/kindest/node:v1.25.3"}
{"time":"2022-12-06T08:32:12.157223Z","level":"INFO","source":"/Users/runner/work/kwok/kwok/pkg/kwokctl/runtime/kind/cluster.go:134","msg":"Pull image","cluster":"cluster-kind-1-25-3","image":"docker.io/prom/prometheus:v2.35.0"}
{"time":"2022-12-06T08:32:23.802416Z","level":"INFO","source":"/Users/runner/work/kwok/kwok/pkg/kwokctl/cmd/create/cluster/cluster.go:230","msg":"Starting cluster","cluster":"cluster-kind-1-25-3"}
Creating cluster "kwok-cluster-kind-1-25-3" ...
 β€’ Ensuring node image (docker.io/kindest/node:v1.25.3) πŸ–Ό  ...
 βœ“ Ensuring node image (docker.io/kindest/node:v1.25.3) πŸ–Ό
 β€’ Preparing nodes πŸ“¦   ...
 βœ“ Preparing nodes πŸ“¦ 
 β€’ Writing configuration πŸ“œ  ...
 βœ“ Writing configuration πŸ“œ
 β€’ Starting control-plane πŸ•ΉοΈ  ...
 βœ“ Starting control-plane πŸ•ΉοΈ
 β€’ Installing CNI πŸ”Œ  ...
 βœ“ Installing CNI πŸ”Œ
 β€’ Installing StorageClass πŸ’Ύ  ...
 βœ“ Installing StorageClass πŸ’Ύ
Set kubectl context to "kind-kwok-cluster-kind-1-25-3"
You can now use your cluster with:

kubectl cluster-info --context kind-kwok-cluster-kind-1-25-3

Not sure what to do next? πŸ˜…  Check out https://kind.sigs.k8s.io/docs/user/quick-start/
You can now use your cluster with:
{"time":"2022-12-06T08:34:49.843114Z","level":"INFO","source":"/Users/runner/work/kwok/kwok/pkg/kwokctl/cmd/create/cluster/cluster.go:241","msg":"Cluster is ready","cluster":"cluster-kind-1-25-3"}

    kubectl config use-context kwok-cluster-kind-1-25-3

Thanks for using kwok!
deployment.apps/fake-pod created
node/fake-node created
deployment.apps/fake-pod unchanged
node/fake-node unchanged
deployment.apps/fake-pod unchanged
node/fake-node unchanged
deployment.apps/fake-pod unchanged
node/fake-node unchanged
deployment.apps/fake-pod unchanged
node/fake-node unchanged
deployment.apps/fake-pod unchanged
node/fake-node unchanged
deployment.apps/fake-pod unchanged
node/fake-node unchanged
deployment.apps/fake-pod unchanged
node/fake-node unchanged
deployment.apps/fake-pod unchanged
node/fake-node unchanged
deployment.apps/fake-pod unchanged
node/fake-node unchanged
deployment.apps/fake-pod unchanged
node/fake-node unchanged
deployment.apps/fake-pod unchanged
node/fake-node unchanged
deployment.apps/fake-pod unchanged
node/fake-node unchanged
deployment.apps/fake-pod unchanged
node/fake-node unchanged
deployment.apps/fake-pod unchanged
node/fake-node unchanged
deployment.apps/fake-pod unchanged
node/fake-node unchanged
deployment.apps/fake-pod unchanged
node/fake-node unchanged
deployment.apps/fake-pod unchanged
node/fake-node unchanged
deployment.apps/fake-pod unchanged
node/fake-node unchanged
deployment.apps/fake-pod unchanged
W1206 08:33:08.167574       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1206 08:33:08.167937       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W1206 08:33:08.326104       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1206 08:33:08.327120       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W1206 08:33:08.425604       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E1206 08:33:08.427971       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W1206 08:33:08.427121       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1206 08:33:08.430158       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W1206 08:33:08.427503       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1206 08:33:08.436017       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W1206 08:33:08.769992       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1206 08:33:08.771352       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W1206 08:33:08.888528       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1206 08:33:08.889922       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W1206 08:33:08.995337       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1206 08:33:08.996560       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W1206 08:33:09.021157       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E1206 08:33:09.022166       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W1206 08:33:09.343054       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E1206 08:33:09.343424       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W1206 08:33:09.344805       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1206 08:33:09.345147       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W1206 08:33:09.345847       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1206 08:33:09.365594       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W1206 08:33:09.592069       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1206 08:33:09.595137       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W1206 08:33:09.734679       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1206 08:33:09.735737       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W1206 08:33:10.103402       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1206 08:33:10.103754       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W1206 08:33:11.715565       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E1206 08:33:11.716900       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W1206 08:33:11.805839       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1206 08:33:11.814157       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W1206 08:33:12.362144       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1206 08:33:12.363418       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W1206 08:33:12.833036       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1206 08:33:12.836530       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W1206 08:33:15.304614       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1206 08:33:15.304974       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I1206 08:33:22.532006       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1206 08:33:25.445830       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-scheduler...
I1206 08:33:25.689098       1 leaderelection.go:258] successfully acquired lease kube-system/kube-scheduler

kwokctl --name=cluster-kind-1-25-3 logs kwok-controller
Error from server (NotFound): pods "kwok-controller" not found
{"time":"2022-12-06T08:35:37.327101Z","level":"ERROR","source":"/Users/runner/work/kwok/kwok/cmd/kwokctl/main.go:37","msg":"Execute exit","err":"/usr/local/bin/kubectl logs -n kube-system kwok-controller: exit status 1"}

{"time":"2022-12-06T08:35:37.397852Z","level":"INFO","source":"/Users/runner/work/kwok/kwok/pkg/kwokctl/cmd/delete/cluster/cluster.go:63","msg":"Stopping cluster","cluster":"cluster-kind-1-25-3"}
Deleting cluster "kwok-cluster-kind-1-25-3" ...
{"time":"2022-12-06T08:35:39.278867Z","level":"INFO","source":"/Users/runner/work/kwok/kwok/pkg/kwokctl/cmd/delete/cluster/cluster.go:69","msg":"Deleting cluster","cluster":"cluster-kind-1-25-3"}
{"time":"2022-12-06T08:35:39.279793Z","level":"INFO","source":"/Users/runner/work/kwok/kwok/pkg/kwokctl/cmd/delete/cluster/cluster.go:74","msg":"Cluster deleted","cluster":"cluster-kind-1-25-3"}
------------------------------
------------------------------
Error: Some tests failed
 - create_cluster_cluster-kind-1-25-3
------------------------------
Test kwokctl/kwokctl_kind failed.
================================================================================
Error: Some tests failed
 - kwokctl/kwokctl_kind

Since when has it been failing?

None

Reason for failure (if possible)

No response

Anything else we need to know?

No response

Kwokctl fails to create a cluster with the default image when the runtime is kind

Please provide an in-depth description of the question you have:

When you run the following command to create a cluster, the installation fails:

➜  kwok kwokctl create cluster --name kwok1 --runtime kind
Creating cluster                                                   cluster=kwok1
Pull image                    cluster=kwok1 image=docker.io/kindest/node:unknown
Error response from daemon: manifest for kindest/node:unknown not found: manifest unknown: manifest unknown
ERROR Failed to setup configcluster=kwok1 err="docker pull docker.io/kindest/node:unknown: exit status 1"
Cluster is cleaned up                                              cluster=kwok1
ERROR Execute exit err="docker pull docker.io/kindest/node:unknown: exit status 1"

What do you think about this question?:

The cause is that the values of the environment variables KWOK_KUBE_VERSION and KWOK_VERSION are read as image tags by default. If the environment variables are not set, the defaults below are used, so the image tag resolves to "unknown" and the image to pull does not exist:

var (
	Version      = "unknown"
	KubeVersion  = "unknown"
	ImagePrefix  = "registry.k8s.io/kwok"
	BinaryPrefix = "https://github.com/kubernetes-sigs/kwok/releases/download"
	BinaryName   = "kwok-" + runtime.GOOS + "-" + runtime.GOARCH
)

So my question is: would it be possible to provide a default installation version when the user does not specify environment variables or parameters?

Environment:
kwokctl version: latest

[kwokctl] Add stop/start commands

What would you like to be added?

Add a stop command for clusters.

Why is this needed?

Mostly, I just want to stop a running cluster when I don't need it, NOT delete it.

[kwokctl] Add `--wait` for create cluster

What would you like to be added?

 --wait duration       wait for the cluster to be ready (default 0s)

Why is this needed?

For those cases where cluster readiness is strictly required.

[kwok] Provide a service for `exec`

# Match pods
kind: ClusterExec
apiVersion: kwok.x-k8s.io/v1alpha1
metadata:
  name: cluster-exec-rules
spec:
  selector:
    matchNamespaces:
      - podNamespace
    matchNames:
      - podName
  execs: []
--- 
# Just for a Pod
kind: Exec
apiVersion: kwok.x-k8s.io/v1alpha1
metadata:
  name: podName
  namespace: podNamespace
spec:
  execs:
    - containers:
        - containerName
      local:
        workDir: "/workdir"
        envs:
          - <envkey>=<envvalue>

      # Maybe support ssh in future
      # ssh:
      #   address:
      #  # todo

/kind feature
