sonobuoy-plugins's Introduction

Sonobuoy Plugins

This repository contains additional plugins for Sonobuoy which aren't necessarily included in the main build.

You can reach out to us on Slack or create an issue if you have any questions. Feel free to suggest improvements to existing plugins or new plugins you'd love to see.

sonobuoy-plugins's People

Contributors

andrewyunt, barthy1, dependabot[bot], franknstyle, johnschnake, kuzm1ch, laevos, madddi, mantoine96, mrporcles, nikhita, phillipsj, poidag-zz, samir900, vladimirvivien, zubron

sonobuoy-plugins's Issues

Update Dockerfile to better use cache

The copying of Go files should be go.mod/go.sum first, then go mod download, then copy the other files. This way, editing general project files doesn't invalidate the cache.
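
A minimal sketch of the intended layering (base image and build command are illustrative, not the repo's actual Dockerfile):

    FROM golang:1.17 AS build
    WORKDIR /src
    # Copy only the module files first so the dependency download gets
    # its own cache layer...
    COPY go.mod go.sum ./
    RUN go mod download
    # ...then copy the rest of the source; editing other project files
    # no longer invalidates the downloaded-dependency layer.
    COPY . .
    RUN go build ./...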

kube-bench crashloopbackoff

The container that runs kube-bench logs some misleading information: results are published, but the pod looks like it is in CrashLoopBackOff.

The kube-bench container is completing successfully, but the restartPolicy of Always causes the kubelet to treat the exit as a crash and try to restart it.

The solution we used for the logs-gathering plugin is to just sleep, because you can't change the restart policy of a DaemonSet to anything other than Always.

In the logs gathering script we write:

    spec:
      command:
      - /bin/sh
      - -c
      - /get_systemd_logs.sh && while true; do echo "Sleeping for 1h to avoid daemonset
        restart"; sleep 3600; done

We should just add that sleep here as well.
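
Concretely, that would mean the kube-bench plugin spec running its script and then sleeping, along these lines (a sketch reusing the plugin's run-kube-bench.sh entrypoint):

    spec:
      command:
      - /bin/sh
      - -c
      - run-kube-bench.sh && while true; do echo "Sleeping for 1h to avoid daemonset
        restart"; sleep 3600; done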

This is definitely an annoyance. In the past we didn't have a better solution because there wasn't a run-once type of DaemonSet, and I don't think that has changed. I'm not sure if we had considered writing plugins using initContainers, though. If we did that, a plugin that just needs to generate data could exit successfully, and then the sonobuoy worker would start and find/report the data.

systemd-logs sleep

Currently, a script is run which exits pretty quickly. Depending on the situation, that may be fine. However, Sonobuoy has to fight Kubernetes a bit here, because Kubernetes wants to keep restarting DaemonSets.

We work around this by having the sonobuoy worker stay alive/sleep forever, but even this container exiting causes container restarts and a state of CrashLoopBackOff.

This plugin, when run as a daemonset, needs to just sleep indefinitely after getting results.

Plugin-helper: Completed count incorrect

When finishing a test, this code currently increments 'completed' regardless of the result. This is intuitive, but the naming is a bit wonky. See vmware-tanzu/sonobuoy#1591.

To be consistent with how the e2e tests report things (the most well-trodden path), 'completed' should not include the failed tests.

CIS kube-bench not published

An update to the CIS plugin using version 0.5.0 of kube-bench was checked in on March 3, but there is no sonobuoy/kube-bench:0.5.0 image on Docker Hub.

sonobuoy e2e test keeps waiting and doesn't finish

I am running sonobuoy e2e tests on a bare-metal k8s cluster in certified-conformance mode.

The tests are not finishing and the sonobuoy pods and some objects are not being cleaned up.

I am using sonobuoy 0.18.2 release.

12:40:19 + sonobuoy run --plugin e2e -m certified-conformance --context target-cluster --kube-conformance-image gcr.io/google-containers/conformance:v1.18.6 --kubeconfig /root/.airship/kubeconfig --log_dir /tmp/sonobuoy_snapshots/e2e
12:40:19 time="2021-03-16T16:40:28Z" level=info msg="created object" name=sonobuoy namespace= resource=namespaces
12:40:19 time="2021-03-16T16:40:28Z" level=info msg="created object" name=sonobuoy-serviceaccount namespace=sonobuoy resource=serviceaccounts
12:40:19 time="2021-03-16T16:40:28Z" level=info msg="object already exists" name=sonobuoy-serviceaccount-sonobuoy namespace= resource=clusterrolebindings
12:40:19 time="2021-03-16T16:40:28Z" level=info msg="object already exists" name=sonobuoy-serviceaccount-sonobuoy namespace= resource=clusterroles
12:40:19 time="2021-03-16T16:40:28Z" level=info msg="created object" name=sonobuoy-config-cm namespace=sonobuoy resource=configmaps
12:40:19 time="2021-03-16T16:40:28Z" level=info msg="created object" name=sonobuoy-plugins-cm namespace=sonobuoy resource=configmaps
12:40:19 time="2021-03-16T16:40:28Z" level=info msg="created object" name=sonobuoy namespace=sonobuoy resource=pods
12:40:19 time="2021-03-16T16:40:28Z" level=info msg="created object" name=sonobuoy-master namespace=sonobuoy resource=services
12:40:19 + kubectl get all -n sonobuoy --kubeconfig /root/.airship/kubeconfig --context target-cluster
12:40:19 NAME           READY   STATUS              RESTARTS   AGE
12:40:19 pod/sonobuoy   0/1     ContainerCreating   0          1s
12:40:19 
12:40:19 NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
12:40:19 service/sonobuoy-master   ClusterIP   10.96.112.39   <none>        8080/TCP   1s
12:40:19 + sonobuoy status --kubeconfig /root/.airship/kubeconfig --context target-cluster
12:40:19 time="2021-03-16T16:40:28Z" level=error msg="error attempting to run sonobuoy: pod has status \"Pending\""
12:40:19 Conformance tests have run
12:40:19 listing all sonobuoy components
12:40:19 NAME           READY   STATUS              RESTARTS   AGE
12:40:19 pod/sonobuoy   0/1     ContainerCreating   0          1s
12:40:19 
12:40:19 NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
12:40:19 service/sonobuoy-master   ClusterIP   10.96.112.39   <none>        8080/TCP   1s

Any idea how to check the failed/unfinished tests, and why the objects are not being cleaned up?

CIS scans throw panic on kind clusters frequently

We've noticed CIS scans failing to run on kind clusters pretty frequently in our tests recently. We found the following logs in the sonobuoy-kube-bench-master-daemon-set pods, which suggest a panic as the scans begin to run. The scans ran on 1.17 and 1.16.3 kind clusters.

Logs
    runtime: mlock of signal stack failed: 12
    runtime: increase the mlock limit (ulimit -l) or
    runtime: update your kernel to 5.3.15+, 5.4.2+, or 5.5+
    fatal error: mlock failed

    runtime stack:
    runtime.throw(0x9c4bde, 0xc)
        /usr/local/go/src/runtime/panic.go:1112 +0x72
    runtime.mlockGsignal(0xc000304300)
        /usr/local/go/src/runtime/os_linux_x86.go:72 +0x107
    runtime.mpreinit(0xc000234700)
        /usr/local/go/src/runtime/os_linux.go:341 +0x78
    runtime.mcommoninit(0xc000234700)
        /usr/local/go/src/runtime/proc.go:630 +0x108
    runtime.allocm(0xc000051000, 0x9eb858, 0x0)
        /usr/local/go/src/runtime/proc.go:1390 +0x14e
    runtime.newm(0x9eb858, 0xc000051000)
        /usr/local/go/src/runtime/proc.go:1704 +0x39
    runtime.startm(0x0, 0xc000107301)
        /usr/local/go/src/runtime/proc.go:1869 +0x12a
    runtime.wakep(...)
        /usr/local/go/src/runtime/proc.go:1953
    runtime.resetspinning()
        /usr/local/go/src/runtime/proc.go:2415 +0x93
    runtime.schedule()
        /usr/local/go/src/runtime/proc.go:2527 +0x2de
    runtime.mstart1()
        /usr/local/go/src/runtime/proc.go:1104 +0x8e
    runtime.mstart()
        /usr/local/go/src/runtime/proc.go:1062 +0x6e

    goroutine 1 [syscall]:
    syscall.Syscall(0x3, 0xc, 0x0, 0x0, 0x0, 0x0, 0x0)
        /usr/local/go/src/syscall/asm_linux_amd64.s:18 +0x5
    syscall.Close(0xc, 0xc00000d820, 0x4)
        /usr/local/go/src/syscall/zsyscall_linux_amd64.go:285 +0x40
    syscall.forkExec(0x9c1fb7, 0x7, 0xc0002c0930, 0x3, 0x3, 0xc0003a1190, 0x45, 0x46283ba300000400, 0xc00047b000)
        /usr/local/go/src/syscall/exec_unix.go:209 +0x39f
    syscall.StartProcess(...)
        /usr/local/go/src/syscall/exec_unix.go:248
    os.startProcess(0x9c1fb7, 0x7, 0xc0002c0930, 0x3, 0x3, 0xc0003a1328, 0x0, 0x0, 0x0)
        /usr/local/go/src/os/exec_posix.go:52 +0x2c0
    os.StartProcess(0x9c1fb7, 0x7, 0xc0002c0930, 0x3, 0x3, 0xc0003a1328, 0x45, 0x0, 0x203000)
        /usr/local/go/src/os/exec.go:102 +0x7c
    os/exec.(*Cmd).Start(0xc00053ab00, 0x503801, 0xc000120cd0)
        /usr/local/go/src/os/exec/exec.go:417 +0x50c
    os/exec.(*Cmd).Run(0xc00053ab00, 0xc000120cd0, 0x2)
        /usr/local/go/src/os/exec/exec.go:337 +0x2b
    os/exec.(*Cmd).Output(0xc00053ab00, 0x7, 0xc0003a1480, 0x2, 0x2, 0xc00053ab00)
        /usr/local/go/src/os/exec/exec.go:541 +0x88
    github.com/aquasecurity/kube-bench/check.isShellCommand(0xc0004ec380, 0x9, 0xe3c401)
        /go/src/github.com/aquasecurity/kube-bench/check/check.go:253 +0xf9
    github.com/aquasecurity/kube-bench/check.runExecCommands(0xc000023740, 0x30, 0xc00012f460, 0x3, 0x4, 0xc0002c0780, 0x0, 0x0, 0x0, 0x0)
        /go/src/github.com/aquasecurity/kube-bench/check/check.go:290 +0x84
    github.com/aquasecurity/kube-bench/check.performTest(0xc000023740, 0x30, 0xc00012f460, 0x3, 0x4, 0xc000526b10, 0x0, 0x0, 0xc0002c06c0, 0x0, ...)
        /go/src/github.com/aquasecurity/kube-bench/check/check.go:270 +0xbd
    github.com/aquasecurity/kube-bench/check.(*Check).run(0xc000529000, 0xc0003a1948, 0xc000108f80)
        /go/src/github.com/aquasecurity/kube-bench/check/check.go:133 +0x219
    github.com/aquasecurity/kube-bench/check.(*defaultRunner).Run(0xe3b458, 0xc000529000, 0x1, 0x3)
        /go/src/github.com/aquasecurity/kube-bench/check/check.go:100 +0x2b
    github.com/aquasecurity/kube-bench/check.(*Controls).RunChecks(0xc00002c480, 0xa8ce00, 0xe3b458, 0xc000108f80, 0x101, 0xc000108f80, 0x0, 0x0)
        /go/src/github.com/aquasecurity/kube-bench/check/controls.go:101 +0x19e
    github.com/aquasecurity/kube-bench/cmd.runChecks(0xc00024d7ec, 0x6, 0xc00024d7e0, 0x17)
        /go/src/github.com/aquasecurity/kube-bench/cmd/common.go:120 +0x68e
    github.com/aquasecurity/kube-bench/cmd.run(0xc000258260, 0x1, 0x1, 0xc000206e60, 0x7, 0xc000206e01, 0x7)
        /go/src/github.com/aquasecurity/kube-bench/cmd/run.go:67 +0x1e8
    github.com/aquasecurity/kube-bench/cmd.glob..func4(0xe065e0, 0xc000232090, 0x0, 0x9)
        /go/src/github.com/aquasecurity/kube-bench/cmd/run.go:49 +0x362
    github.com/spf13/cobra.(*Command).execute(0xe065e0, 0xc000232000, 0x9, 0x9, 0xe065e0, 0xc000232000)
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:766 +0x29d
    github.com/spf13/cobra.(*Command).ExecuteC(0xe06f60, 0xe3b458, 0x0, 0x0)
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:852 +0x2ea
    github.com/spf13/cobra.(*Command).Execute(...)
        /go/pkg/mod/github.com/spf13/[email protected]/command.go:800
    github.com/aquasecurity/kube-bench/cmd.Execute()
        /go/src/github.com/aquasecurity/kube-bench/cmd/root.go:115 +0x55
    main.main()
        /go/src/github.com/aquasecurity/kube-bench/main.go:22 +0x20

    goroutine 18 [chan receive]:
    github.com/golang/glog.(*loggingT).flushDaemon(0xe109a0)
        /go/pkg/mod/github.com/golang/[email protected]/glog.go:882 +0x8b
    created by github.com/golang/glog.init.0
        /go/pkg/mod/github.com/golang/[email protected]/glog.go:410 +0x26f

    Sleeping for 1h to avoid daemonset restart

kind version: v1.16.3
sonobuoy version: we are using github.com/zubron/sonobuoy v1.11.5-prerelease.1.0.20200706195956-8ef2fd901589 because of some dependency reasons

Cluster-inventory seems to be no longer working with Sonobuoy 0.55.1

Hi everybody.

I want to update Sonobuoy from v0.54.0 to v0.55.1, but our cluster-inventory plugin (v0.0.2) is failing. In short, the error seems to be: error gathering host data: open /tmp/results/results.tar.gz: no such file or directory. The whole gist (command and logs) can be found here.

Any help would be much appreciated.

Upgrade to latest version of kube-bench for CIS benchmark

We are currently using version 0.2.1 of kube-bench; the latest available version is 0.2.3. This includes a number of new changes, most importantly the checks for version 1.5 of the CIS benchmark.

The structure of checks for CIS 1.5 is different from earlier versions; it now includes five different sections. To target each of these, kube-bench introduced a new run command where each of these targets can be specified.

There is an issue with the use of this command, however. If multiple targets are specified and the --outputfile flag is being used, only one set of results will be available, as the file is overwritten during the tests for each target. We can work around this in the plugin by running kube-bench once for each target and using a tarball as the results file.

With the way we are currently running kube-bench in our yaml file, adding multiple calls and other commands to create the tarball will make the yaml difficult to parse/maintain. We may want to consider a wrapper script where the user could select which targets to run, for example by using the plugin environment variables.
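
A sketch of such a wrapper (the KUBE_BENCH_TARGETS variable and output layout are hypothetical; run --targets and --outputfile are the kube-bench flags discussed above, and the done-file convention matches the other plugins in this repo):

    #!/bin/sh
    # Run kube-bench once per requested target so --outputfile isn't
    # overwritten, then bundle the per-target results into one tarball.
    TARGETS="${KUBE_BENCH_TARGETS:-master,node}"
    mkdir -p /tmp/results/out
    for t in $(echo "$TARGETS" | tr ',' ' '); do
      kube-bench run --targets "$t" --outputfile "/tmp/results/out/${t}.txt"
    done
    tar czf /tmp/results/results.tar.gz -C /tmp/results out
    echo -n /tmp/results/results.tar.gz > /tmp/results/done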

Plugin-helper: sonobuoy doesn't pick up default results file

The default sonobuoy results writer writes to "manual_results.yaml" but the logic for manual processing looks for "sonobuoy_results.yaml".

Regardless of which way we resolve this, the default should work better.

We can:

  • change the default name the helper uses
  • make sonobuoy look for other *.yaml files and not just a specific name

who-can plugin throws "server doesn't have a resource type" error

Hi,

Thank you for creating the who-can plugin. I am trying to run the plugin in my cluster and I get the below error:
running who can: running checker: resolving resource: the server doesn't have a resource type "pods/node_netstat_TcpExt_SyncookiesFailed"

Please advise, as I am not sure if this is a limitation of the plugin, i.e. that it cannot recognize named resources such as pods/mypodname. If so, could you help me work around this problem?

Design Doc

Ideas and brainstorming are in #87, but I want to do a better job of planning it all out before starting to code, since I don't want this to just be a one-off for each feature. I want there to eventually be a few (not necessarily just one) well-scoped options to choose from.

Post-Processor: override result?

When talking to a few users, the issue came up that some things that look like failures in some test suites are not actual failures. You might argue that the test suite authors should modify the test, but since there are innumerable configurations, a test author can't know all the ways a cluster might be implemented and what special cases could be possible. E.g., it checks for some security feature which ends up missing (a failure), but the cluster itself is airgapped, so the feature isn't necessary and/or can't even be configured/tested.

The obvious concern is: "can users then add post-processing to modify results to look like all tests passed and get certified conformance or other benefits?" Yes and no. Obviously they could use it to modify the results, but there is NOTHING stopping them from modifying the results after the fact now anyway; this doesn't seem to be a new issue. In addition, even with the post-processor, the sonobuoy tarball itself would show the use and configuration of the post-processor, so it would still require after-the-fact editing to try to pass off mocked results.

Inventory Plugin - support for CSI

As a user of the Inventory plugin, I should be able to retrieve the following information (see the kubectl sketch after this list):

  • Ability to discover CSI providers
  • Storage classes
  • Mapping pods to PersistentVolumes
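
For reference, a rough sketch of gathering the same data with ad-hoc kubectl queries (assuming the cluster exposes the CSI storage APIs):

    # Discover CSI providers
    kubectl get csidrivers
    # List storage classes and their provisioners
    kubectl get storageclasses
    # Map claims to PersistentVolumes (pods reference these claims
    # in .spec.volumes, completing the pod-to-PV mapping)
    kubectl get pvc --all-namespaces \
      -o custom-columns=NAMESPACE:.metadata.namespace,PVC:.metadata.name,PV:.spec.volumeName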

unable to get benchmark version

hi!

environment: PKS 1.9 (k8s: 1.18.8)
current entpks YAML, with sonobuoy v0.19.0 and sonobuoy/kube-bench 0.4.0

I receive the following error:

... entpks/kube-bench-plugin.yaml
...
  env:
    - name: KUBERNETES_VERSION
      value: "1.18"
    - name: TARGET_MASTER
      value: "false"
    - name: TARGET_NODE
      value: "true"
....
$ ./sonobuoy run --plugin sonobuoy-plugins/cis-benchmarks/entpks/kube-bench-plugin.yaml --sonobuoy-image privateregistry/sonobuoy:v0.19.0 --wait
....
$ kubectl get pods -n sonobuoy
NAME                                                         READY   STATUS    RESTARTS   AGE
sonobuoy                                                     1/1     Running   0          38s
sonobuoy-kube-bench-node-daemon-set-937acd87e9c0418f-blfjr   2/2     Running   0          33s
sonobuoy-kube-bench-node-daemon-set-937acd87e9c0418f-g6bkv   2/2     Running   0          33s
sonobuoy-kube-bench-node-daemon-set-937acd87e9c0418f-mq6x2   2/2     Running   0          33s
sonobuoy-kube-bench-node-daemon-set-937acd87e9c0418f-nb55v   2/2     Running   0          33s

$ kubectl logs pod/sonobuoy-kube-bench-node-daemon-set-937acd87e9c0418f-blfjr  -n sonobuoy plugin                                                          

unable to get benchmark version. error: unable to find a matching Benchmark Version match for kubernetes version: 1.18
Sleeping for 1h to avoid daemonset restart

Any ideas what could be the reason for this?
Thanks!

Sonobuoy aqua plugin failed to start

Platform: Openshift 4.4
Kubernetes server version: v1.17.1+912792b

$ wget https://raw.githubusercontent.com/vmware-tanzu/sonobuoy-plugins/master/kube-hunter/kube-hunter-plugin.yaml
$ sonobuoy version
Sonobuoy Version: v0.18.3
MinimumKubeVersion: 1.16.0
MaximumKubeVersion: 1.18.99
GitSHA: 3e8a10e5145f21840b308b76487f3e10b9c1261d
API Version check skipped due to missing `--kubeconfig` or other error
$ sonobuoy run -f kube-hunter-plugin.yaml  --dns-namespace openshift-dns --dns-pod-labels dns.operator.openshift.io/daemonset-dns=default
ERRO[0000] error attempting to run sonobuoy: couldn't decode template: unmarshalerDecoder: Object 'Kind' is missing in '{"sonobuoy-config":{"driver":"Job","plugin-name":"kube-hunter","result-format":"raw"},"spec":{"command":["/bin/sh","-c","python kube-hunter.py --pod --report=json | tee /tmp/results/report.json \u0026\u0026 echo -n /tmp/results/report.json \u003e /tmp/results/done"],"image":"sonobuoy/kube-hunter:v0.2.0","name":"plugin","resources":{},"volumeMounts":[{"mountPath":"/tmp/results","name":"results"}]}}', error found in #10 byte of ...|sults"}]}}|..., bigger context ...|:[{"mountPath":"/tmp/results","name":"results"}]}}|...

Plugin-helper: done logic should write done file

We have pluginhelper.Done() and SonobuoyResultsWriter.Done(). The former writes the done file and the latter serializes results to disk. We should make it so you only have to invoke resultsWriter.Done() instead of making two calls; maybe even just gated by a boolean parameter.

Kube-bench-master plugin returns Status: unknown

When I run the CIS benchmark in my RKE Kubernetes cluster, the kube-bench-master plugin always returns Status: unknown. I've tried viewing the detailed results, but none are provided. How can I figure out what the underlying problem is? I can run the CIS benchmark from the Rancher UI and don't see this problem. Thanks!

$ sonobuoy version --kubeconfig ~/.kube/config
Sonobuoy Version: v0.18.3
MinimumKubeVersion: 1.16.0
MaximumKubeVersion: 1.18.99
GitSHA: 3e8a10e5145f21840b308b76487f3e10b9c1261d
API Version: v1.17.4

$ sonobuoy run --plugin plug-ins/kube-bench-master-plugin.yaml --wait
INFO[0000] created object name=sonobuoy namespace= resource=namespaces
INFO[0000] created object name=sonobuoy-serviceaccount namespace=sonobuoy resource=serviceaccounts
INFO[0000] created object name=sonobuoy-serviceaccount-sonobuoy namespace= resource=clusterrolebindings
INFO[0000] created object name=sonobuoy-serviceaccount-sonobuoy namespace= resource=clusterroles
INFO[0000] created object name=sonobuoy-config-cm namespace=sonobuoy resource=configmaps
INFO[0000] created object name=sonobuoy-plugins-cm namespace=sonobuoy resource=configmaps
INFO[0000] created object name=sonobuoy namespace=sonobuoy resource=pods
INFO[0000] created object name=sonobuoy-master namespace=sonobuoy resource=services

$ sonobuoy retrieve
202006221914_sonobuoy_1832bb9a-52a7-4c72-989f-c459a395b4e2.tar.gz

$ sonobuoy results 202006221914_sonobuoy_1832bb9a-52a7-4c72-989f-c459a395b4e2.tar.gz
Plugin: kube-bench-master
Status: unknown
Total: 1
Passed: 0
Failed: 0
Skipped: 0
unknown: 1

$ bin/sonobuoy results 202006221914_sonobuoy_1832bb9a-52a7-4c72-989f-c459a395b4e2.tar.gz --plugin kube-bench-master --mode detailed
{"name":"kube-bench-master","status":"unknown","meta":{"path":"","type":"summary"}}

[CIS] Only add compatible targets if the version is known

The CIS benchmark plugin attempts to run any targets specified by the user, even if they are incompatible with the Kubernetes version under test. This is proving to be a poor user experience in cases where the user has already provided the version.

With #19, we added a utility function to check whether a version is less than or equal to another version. Given that we have this capability, we should add checks to the run-kube-bench.sh script so that we only add the kube-bench targets which are compatible with the CIS benchmark version that will be run.
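
A sketch of what that gating could look like in run-kube-bench.sh (the version_lte helper name and the BENCHMARK variable are illustrative, not necessarily the script's actual names):

    # The etcd, policies, and controlplane targets only exist for
    # CIS 1.5 and later; gate them on the benchmark version.
    TARGETS="master,node"
    if version_lte "1.5" "${BENCHMARK}"; then
      TARGETS="${TARGETS},etcd,policies,controlplane"
    fi
    kube-bench run --targets "${TARGETS}"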

Systemd-logs 409

When the systemd-logs plugin runs/finishes, it exits, which causes it (as a DaemonSet) to restart. It then runs again, submits its results, and gets an error (409) because the results were already submitted. It repeats this loop forever.

There was some discussion upstream about run-once DaemonSets; not sure if something there can be done.

Otherwise we need to check for existing results and/or not exit with an error. The current behavior eats up resources and fills up logs.

Update build logic/target

This currently went in pointing to my own custom repo; we should build an image for the sonobuoy repo and use more explicit versioning when we update it.

Pull example plugins from sonobuoy repo

There are a few example plugins in vmware-tanzu/sonobuoy; those should be moved here so we can keep them centralized and focus plugin efforts in one place.

systemd-logs refresh

The build process for this still uses manifest-tool and qemu rather than docker buildx, which would simplify a lot of the scripts.

Would be worthwhile to give this a refresh for maintainability.

Plugin-skeleton: recorded demo

I think it would be helpful to have a recorded (audio+visual) demo of how to use the plugin skeleton: just an effective hello-world showing how to report one test result.

Failed to find /usr/bin/grep

Hello, I'm trying to run the sonobuoy cis-benchmarks plugin with this command:

sonobuoy run --plugin https://raw.githubusercontent.com/vmware-tanzu/sonobuoy-plugins/master/cis-benchmarks/kube-bench-plugin.yaml --plugin https://raw.githubusercontent.com/vmware-tanzu/sonobuoy-plugins/master/cis-benchmarks/kube-bench-master-plugin.yaml --wait

It failed on all items that use '/bin/ps -ef | grep ... | grep -v grep'. Then I ran 'kube-bench master -v 5 --version 1.15' inside the sonobuoy-kube-bench-master-daemon-set pod (by adding a long 'sleep' to the container command in kube-bench-master-plugin.yaml), and saw this error:

failed to run: /bin/ps -ef | grep kube-scheduler | grep -v grep, command: [grep -v grep], error: fork/exec /usr/bin/grep: no such file or directory

After removing the /usr/bin/ mount (https://github.com/vmware-tanzu/sonobuoy-plugins/blob/master/cis-benchmarks/kube-bench-master-plugin.yaml#L57-L60), the error is gone. It's because the /usr/bin/grep from my master node VM doesn't work inside the kube-bench container, and kube-bench uses the relative path to find 'grep' (https://github.com/aquasecurity/kube-bench/blob/master/cfg/cis-1.5/master.yaml#L919).

I think we can remove the /usr/bin/ mount in kube-bench-master-plugin.yaml since '--version' is already set. We could also file a bug against https://github.com/aquasecurity/kube-bench to replace all 'grep' with '/bin/grep'.

BTW, the latest tag of the aquasec/kube-bench container is 0.2.2; it would be good to add a note in README.md telling users to update '--version' in kube-bench-master-plugin.yaml.

Thanks!

Port to other languages

The plugin helper makes it easy to write results and status updates back to the sidecar worker. We have this as a Go library, but it could be really convenient to have this functionality exposed for various languages, since plugins can be written in anything.

This ticket originated from a discussion in Slack asking how to do this. It also came up in the context of the cmd-runner plugin, which is written in bash.
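
The progress mechanism itself is plain HTTP: the sidecar worker exposes an endpoint the plugin container can POST JSON updates to, so any language works even without a library. A hedged sketch (the SONOBUOY_PROGRESS_PORT env var, default port, and /progress path are per the Sonobuoy progress docs; verify against the version in use):

    # Send a status update to the sidecar worker from a shell-based plugin.
    curl -s -X POST \
      "http://localhost:${SONOBUOY_PROGRESS_PORT:-8099}/progress" \
      -d '{"msg":"processed 3 of 10","completed":3,"total":10}'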

Conformance scan on TKGm on Azure shows 20/275 failures

A v1.19.1+vmware.2 cluster on TKG (m) in an Azure environment.

Most of these look like timeouts waiting on a webhook.

/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Oct 29 22:33:58.015: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0000b9fd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/workspace/anago-v1.18.3-beta.0.58+d6e40f410ca91c/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1055

Full scan results attached.
diagnostics-tkgm-azure.tar.gz

  • [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
  • [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
  • [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
  • [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
  • [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
  • [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
  • [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
  • [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
  • [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
  • [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
  • [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
  • [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
  • [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
  • [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  • [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
  • [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  • [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
  • [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
  • [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
  • [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]

Update kube-bench

Currently on v0.6.3 and missing:

  • v0.6.4
  • v0.6.5
  • v0.6.6

A new feature allowing 'skip' values to be passed was just merged, so we should go ahead and build/push new images.

For v0.6.3 (the current version) we just won't have the skip functionality.
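
Once on a release that includes it, the skips could be wired through the plugin roughly like this (the --skip flag is kube-bench's; the check IDs are placeholders):

    # Skip individual checks by ID (comma-separated), e.g. ones a
    # particular distribution intentionally does not satisfy. Requires a
    # kube-bench release that includes the newly merged skip feature.
    kube-bench run --targets master --skip "1.1.12,1.2.16"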

Post-processing: Done file specification with Sonobuoy

Some plugins, such as the upstream e2e plugin, are hardcoded to write the done file as soon as they are done. If we want to support post-processing, the done file needs to be handled differently.

I had a PR to make it customizable (vmware-tanzu/sonobuoy#1532), but that was just so I could keep moving forward; I'm not certain that is the best approach.

Ultimately whatever is easiest for the consumer is what I think is best. Just need to reflect on this before implementing.

requirements-check: Review TAP

TAP has a number of checks that could potentially be automated. Investigate and make a list of the checks with a high likelihood of reuse.

Create better tags for issues

For the backlog I've just been adding the plugin name in the title; it would be easier to organize if we had tags for each plugin.
