
kube-bench's Introduction



kube-bench is a tool that checks whether Kubernetes is deployed securely by running the checks documented in the CIS Kubernetes Benchmark.

Tests are configured with YAML files, making this tool easy to update as test specifications evolve.
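For illustration, an individual check in those YAML files takes roughly this shape (a simplified sketch; the exact schema is defined by the bundled cfg files and varies between benchmark versions):

- id: 1.1.1
  text: "Ensure that the --allow-privileged argument is set to false (Scored)"
  audit: "ps -ef | grep $apiserverbin | grep -v grep"
  tests:
    test_items:
      - flag: "--allow-privileged"
        compare:
          op: eq
          value: false
        set: true
  remediation: "Edit the API server configuration and set --allow-privileged=false."
  scored: true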


CIS Scanning as part of Trivy and the Trivy Operator

Trivy, the all-in-one cloud native security scanner, can be deployed as a Kubernetes Operator inside a cluster. Both the Trivy CLI and the Trivy Operator support CIS Kubernetes Benchmark scanning among several other features.

Quick start

There are multiple ways to run kube-bench. You can run kube-bench inside a pod, but it will need access to the host's PID namespace in order to check the running processes, as well as access to some directories on the host where config files and other files are stored.
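A pared-down manifest illustrating those host hooks might look like this (a sketch only; the supplied job.yaml is the canonical version, and the image tag and mount paths here are assumptions):

apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    spec:
      hostPID: true               # needed to inspect processes running on the host
      containers:
        - name: kube-bench
          image: aquasec/kube-bench:latest
          command: ["kube-bench"]
          volumeMounts:
            - name: etc-kubernetes
              mountPath: /etc/kubernetes
              readOnly: true      # host config files the checks need to read
      restartPolicy: Never
      volumes:
        - name: etc-kubernetes
          hostPath:
            path: /etc/kubernetes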

The supplied job.yaml file can be applied to run the tests as a job. For example:

$ kubectl apply -f job.yaml
job.batch/kube-bench created

$ kubectl get pods
NAME                      READY   STATUS              RESTARTS   AGE
kube-bench-j76s9   0/1     ContainerCreating   0          3s

# Wait for a few seconds for the job to complete
$ kubectl get pods
NAME                      READY   STATUS      RESTARTS   AGE
kube-bench-j76s9   0/1     Completed   0          11s

# The results are held in the pod's logs
$ kubectl logs kube-bench-j76s9
[INFO] 1 Master Node Security Configuration
[INFO] 1.1 API Server
...

For more information and different ways to run kube-bench, see the documentation.

Please Note

  1. kube-bench implements the CIS Kubernetes Benchmark as closely as possible. Please raise issues here if kube-bench is not correctly implementing the test as described in the Benchmark. To report issues in the Benchmark itself (for example, tests that you believe are inappropriate), please join the CIS community.

  2. There is not a one-to-one mapping between releases of Kubernetes and releases of the CIS benchmark. See CIS Kubernetes Benchmark support to see which releases of Kubernetes are covered by different releases of the benchmark.

By default, kube-bench will determine the test set to run based on the Kubernetes version running on the machine.

Contributing

Kindly read Contributing before contributing. We welcome PRs and issue reports.

Roadmap

Going forward we plan to release updates to kube-bench to add support for new releases of the CIS Benchmark. Note that these are not released as frequently as Kubernetes itself.


kube-bench's Issues

Automatically determine the executables and file locations

We should be able to do a better job of figuring out the executables and config file locations automatically.

Instead of reading the binary names out of the config file, we could examine ps output to see what's actually running. For executables, we can look for

  • kubelet
  • kube-proxy or hyperkube proxy
  • kube-apiserver or hyperkube apiserver or apiserver
  • kube-controller or hyperkube controller-manager or controller-manager
  • kube-scheduler or hyperkube scheduler or scheduler
  • kube-federation-apiserver or hyperkube federation-apiserver or federation-apiserver
  • kube-federation-controller-manager or hyperkube federation-controller-manager or federation-controller-manager

For config files we can similarly look in possible places

  • /etc/kubernetes/kubelet or /etc/systemd/system/kubelet.service (there may well be other options)
  • /etc/kubernetes/addons/kube-proxy-daemonset.yaml or /etc/kubernetes/proxy
  • /etc/kubernetes/manifests/kube-apiserver.yaml or /etc/kubernetes/apiserver
  • /etc/kubernetes/manifests/kube-controller-manager.yaml or /etc/kubernetes/controller-manager
  • /etc/kubernetes/manifests/kube-scheduler.yaml or /etc/kubernetes/scheduler
  • /etc/kubernetes/manifests/kube-federation-apiserver.yaml or /etc/kubernetes/federation-apiserver
  • /etc/kubernetes/manifests/kube-federation-controller-manager.yaml or /etc/kubernetes/federation-controller-manager

The auto-detection could be overridden by config file settings from config.yaml.
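For example, the overrides might take a shape along these lines (a hypothetical sketch, not the current schema):

master:
  apiserver:
    bins:
      - kube-apiserver
      - hyperkube apiserver
    confs:
      - /etc/kubernetes/manifests/kube-apiserver.yaml
      - /etc/kubernetes/apiserver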

Version checks using kubectl

It seems to be very common that the user won't have all the Kubernetes executables in their path (e.g. kube-apiserver), but that they can still query the running version using kubectl, so this would be a better approach.
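For example, something like this works even when the component binaries aren't on the path (a sketch; flag support varies by kubectl version):

# Query the running cluster's version via kubectl instead of local binaries.
kubectl version --short 2>/dev/null | awk '/Server Version/ {print $3}'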

kube-bench should detect node type automatically

We should be able to automatically determine the tests to run on a node:

  • run the node tests on all nodes
  • run the master tests if any of kube-apiserver, kube-controller-manager or kube-scheduler are running (including checking alternative names for these components on different platforms, as specified in the config)
  • run the federation tests if any of federation-apiserver or federation-controller-manager are running
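A rough shell equivalent of that decision logic (a sketch only; a real implementation would also check the per-platform alternative names from the config):

# Node checks run everywhere; master checks only where control-plane
# components show up in the process table.
./kube-bench node
if ps -ef | grep -v grep | grep -Eq 'kube-apiserver|kube-controller-manager|kube-scheduler'; then
  ./kube-bench master
fi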

Ability to ignore etcd specific configuration

Firstly, amazingly useful project! Thanks so much!

I'm using kubeadm to generate my configuration, and also using an external etcd.

I can ignore the etcd specific components using the group configuration:

./kube-bench master --group="1.1,1.2,1.3,1.4,1.6,1.7"

I have also removed etcd and flanneld from cfg/config.yaml

However, the configuration section is still showing failures because the etcd manifest doesn't exist:

[FAIL] 1.4.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Scored)
[FAIL] 1.4.8 Ensure that the etcd pod specification file ownership is set to root:root (Scored)

Would it actually be possible to ignore these checks by knowing they're etcd-based and skipping them?

Info and warnings messing up the JSON output

kube-bench currently issues info output about the config file it's using, as well as warnings about config files that can't be found, or when it doesn't have the executables in its path to check the version number. These warnings need to be omitted when the output format is JSON.

[WARN] kube-apiserver: command not found on path - version check skipped
[WARN] kube-scheduler: command not found on path - version check skipped
[WARN] kube-controller-manager: command not found on path - version check skipped
[WARN] config file /etc/kubernetes/apiserver does not exist
[WARN] config file /etc/kubernetes/scheduler does not exist
[WARN] config file /etc/kubernetes/controller-manager does not exist
[WARN] config file /etc/kubernetes/config does not exist
[WARN] config file /etc/etcd/etcd.conf does not exist
[WARN] config file /etc/sysconfig/flanneld does not exist
{"ID":"1","Text":"Master Node Security Configuration","Type":"master","Groups":[{"ID":"1.1","Text":"API Server","Checks":[{"id":"1.1.1","Text":"Ensure that the --allow-privileged argument is set to false (Scored)","Remediation":"Edit the /etc/kubernetes/config file on the master node and set the KUBE_ALLOW_PRIV parameter to \"--allow-privileged=false\"","State":"FAIL"},{"id":"1.1.2","Text":"Ensure that the --anonymous-auth argument is set to false (Scored)","Remediation":"Edit the /etc/kubernetes/apiserver file on the master node and set the KUBE_API_ARGS parameter to \"--anonymous-auth=false\"","State":"FAIL"},{"id":"1.1.3","Text":"Ensure that the --basic-auth-file argument is not set (Scored)","Remediation":"Follow the documentation and configure alternate mechanisms for authentication. Then, edit the /etc/kubernetes/apiserver file on the master node and remove the \"--basic-auth-file=\u003cfilename\u003e\" argument from the KUBE_API_ARGS parameter.","State":"PASS"},{"id":"1.1.4","Text":"Ensure that the --insecure-allow-any-token argument is not set (Scored)","Remediation":"Edit the /etc/kubernetes/apiserver file on the master node and remove the --insecure-allow-any-token argument from the KUBE_API_ARGS parameter.","State":"PASS"},{"id":"1.1.5","Text":"Ensure that the --kubelet-https argument is set to true (Scored)","Remediation":"Edit the /etc/kubernetes/apiserver file on the master node and remove the --kubelet-https argument from the KUBE_API_ARGS parameter.","State":"FAIL"},{"id":"1.1.6","Text":"Ensure that the --insecure-bind-address argum...```

getkubeversion fails

./kube-bench master -c 1.4.11 -v1

I1004 15:34:01.582656   20082 util.go:198] executable 'etcd' not running
I1004 15:34:01.629281   20082 util.go:198] executable 'flanneld' not running
[WARN] Kubernetes version check skipped, with error getting kubectl version
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x784201]

goroutine 1 [running]:
github.com/aquasecurity/kube-bench/cmd.runChecks(0x858e95, 0x6)
        /home/niinai/go/src/github.com/aquasecurity/kube-bench/cmd/common.go:81 +0x201
github.com/aquasecurity/kube-bench/cmd.glob..func2(0xa5b880, 0xc420017cb0, 0x0, 0x3)
        /home/niinai/go/src/github.com/aquasecurity/kube-bench/cmd/master.go:28 +0x36
github.com/spf13/cobra.(*Command).execute(0xa5b880, 0xc420017bc0, 0x3, 0x3, 0xa5b880, 0xc420017bc0)
        /home/niinai/go/src/github.com/spf13/cobra/command.go:648 +0x231
github.com/spf13/cobra.(*Command).ExecuteC(0xa5bcc0, 0x7cc000, 0x0, 0x0)
        /home/niinai/go/src/github.com/spf13/cobra/command.go:734 +0x339
github.com/spf13/cobra.(*Command).Execute(0xa5bcc0, 0xa7e600, 0x0)
        /home/niinai/go/src/github.com/spf13/cobra/command.go:693 +0x2b
github.com/aquasecurity/kube-bench/cmd.Execute()
        /home/niinai/go/src/github.com/aquasecurity/kube-bench/cmd/root.go:52 +0x9b
main.main()
        /home/niinai/go/src/github.com/aquasecurity/kube-bench/main.go:22 +0x20

In this case, the kubectl context was not set.

Binaries for new release

Hi,
Could you publish binaries for the latest release? I know I can build from source, but the previous release (0.0.11) had binaries, and my Dockerfile just curls them from the GitHub URL.

Build from scratch image

It would be better if the distributed Docker image were built from scratch rather than from a Golang base image.

The controls for master - admission control showing wrong status

For master, controls 1.1.11, 1.1.12, 1.1.13, 1.1.14, 1.1.15, 1.1.25, 1.1.28 and 1.1.33 are showing the wrong status.

Example:

 # ./kube-bench master -c 1.1.11,1.1.12,1.1.13,1.1.14,1.1.15,1.1.25,1.1.28,1.1.33
Unexpected Client version 1.6
Unexpected Server version 1.6
[INFO] 1 Master Node Security Configuration
[INFO] 1.1 API Server
[PASS] 1.1.11 Ensure that the admission control policy is not set to AlwaysAdmit (Scored)
[FAIL] 1.1.12 Ensure that the admission control policy is set to AlwaysPullImages (Scored)
[FAIL] 1.1.13 Ensure that the admission control policy is set to DenyEscalatingExec (Scored)
[FAIL] 1.1.14 Ensure that the admission control policy is set to SecurityContextDeny (Scored)
[PASS] 1.1.15 Ensure that the admission control policy is set to NamespaceLifecycle (Scored)
[FAIL] 1.1.25 Ensure that the admission control policy is set to PodSecurityPolicy (Scored)
[FAIL] 1.1.28 Ensure that the admission control policy is set to ServiceAccount (Scored)
[FAIL] 1.1.33 Ensure that the admission control policy is set to NodeRestriction (Scored)

== Remediations ==
1.1.12 Edit the /etc/kubernetes/apiserver file on the master node and set the KUBE_ADMISSION_CONTROL parameter to "--admission-control=...,AlwaysPullImages,..."
1.1.13 Edit the /etc/kubernetes/apiserver file on the master node and set the KUBE_ADMISSION_CONTROL parameter to "--admission-control=...,DenyEscalatingExec,..."
1.1.14 Edit the /etc/kubernetes/apiserver file on the master node and set the KUBE_ADMISSION_CONTROL parameter to "--admission-control=...,SecurityContextDeny,..."
1.1.25 Follow the documentation and create Pod Security Policy objects as per your environment. Then, edit the /etc/kubernetes/apiserver file on the master node and set the KUBE_ADMISSION_CONTROL parameter to "--admission-control=...,PodSecurityPolicy,..."
1.1.28 Follow the documentation and create ServiceAccount objects as per your environment. Then, edit the /etc/kubernetes/apiserver file on the master node and set the KUBE_ADMISSION_CONTROL parameter to "--admissioncontrol=...,ServiceAccount,..."
1.1.33 Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets. Then, edit the /etc/kubernetes/apiserver file on the master node and set the KUBE_ADMISSION_CONTROL parameter to "--admissioncontrol=...,NodeRestriction,..."

== Summary ==
2 checks PASS
6 checks FAIL
0 checks WARN

Command output:

 # systemctl status kube-apiserver -l
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2017-09-12 03:21:32 EDT; 56min ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 2733 (hyperkube)
    Tasks: 21
   Memory: 134.6M
      CPU: 42.538s
   CGroup: /system.slice/kube-apiserver.service
           └─2733 /usr/bin/hyperkube apiserver --logtostderr=true --v=0 --etcd-servers=https://<MASKED_IP>:2379 --insecure-bind-address=127.0.0.1 --allow-privileged=true --service-cluster-ip-range=10.3.0.0/24 --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,AlwaysPullImages,DenyEscalatingExec,SecurityContextDeny,PodSecurityPolicy --anonymous-auth=false --profiling=false --repair-malformed-updates=false --advertise-address=10.88.51.70 --bind-address=0.0.0.0 --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/client.pem --etcd-keyfile=/etc/kubernetes/ssl/client-key.pem --runtime-config=extensions/v1beta1=true,extensions/v1beta1/networkpolicies=true,batch/v2alpha1 --secure-port=4443 --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem

OS and Kubernetes version:

vm70:/home/junedm/kube-bench # kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"$Format:%H$", GitTreeState:"not a git tree", BuildDate:"2017-07-23T22:07:37Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"$Format:%H$", GitTreeState:"not a git tree", BuildDate:"2017-07-23T22:07:37Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
vm70:/home/junedm/kube-bench # uname -a
Linux vm70 4.4.21-69-default #1 SMP Tue Oct 25 10:58:20 UTC 2016 (9464f67) x86_64 x86_64 x86_64 GNU/Linux
vm70:/home/junedm/kube-bench # lsb_release -a
LSB Version:    n/a
Distributor ID: SUSE
Description:    SUSE Linux Enterprise Server 12 SP2
Release:        12.2
Codename:       n/a
vm70:/home/junedm/kube-bench #

Wrong conversion for node 2.1.6 Ensure that the --streaming-connection-idle-timeout argument is not set to 0

The check in 2.1.6 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Scored) is broken. Kubernetes expects the parameter to be a duration string (a value plus unit, like 1h or 5m), as we can see in the documentation:

Maximum time a streaming connection can be idle before the connection is automatically closed. 0 indicates no timeout. Example: '5m' (default 4h0m0s)

However, the check expects an int value, so the conversion obviously fails:
error converting 1h: strconv.Atoi: parsing "1h": invalid syntax

Add config settings for OpenShift

OpenShift uses different binaries and config file locations, and I'm opening this issue to gauge interest in adding OpenShift config to kube-bench.

OpenShift has its own approach to security and doesn't directly expose many of the config arguments that kube-bench checks, which makes many of the CIS Benchmark tests seem mostly irrelevant.

However, "all kubelet settings that have corresponding command-line flags can already be set using the kubeletArguments map in the node config file, with the caveat that it can result in insecure, untested, and/or invalid configurations".

It would also be possible to at least check for things like file permissions.

  • Config files live in /etc/origin/master and /etc/origin/node directories (for master & node respectively)
  • Executables are openshift start node and openshift start master

container failed to start - "`pwd`" includes invalid characters

[root@k8s32-vm0 kube-bench]# docker run --rm -v `pwd`:/host aquasec/kube-bench:latest
Unable to find image 'aquasec/kube-bench:latest' locally
Trying to pull repository docker.io/aquasec/kube-bench ...
latest: Pulling from docker.io/aquasec/kube-bench
ef0380f84d05: Extracting [==================================================>] 52.57 MB/52.57 MB
24c170465c65: Download complete
4f38f9d5c3c0: Download complete
d36744f83dc1: Download complete
107ef86d710c: Download complete
ef0380f84d05: Pull complete
24c170465c65: Pull complete
4f38f9d5c3c0: Pull complete
d36744f83dc1: Pull complete
107ef86d710c: Pull complete
b789525fd509: Pull complete
633896968756: Pull complete
f68634ee500b: Pull complete
e8bd062c9b27: Pull complete
d1516e5a487b: Pull complete
78a9aed077ca: Pull complete
65230781dafa: Pull complete
9606ea51ca81: Pull complete
Digest: sha256:20f2c34568ab9a889a923b9925d1c547532a6e71d583eb613353c1b98b1ff9f0
/usr/bin/docker-current: Error response from daemon: create pwd: "pwd" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed.
See '/usr/bin/docker-current run --help'.

Aqua Security on Aws ubuntu Kubernetes cluster

Hello, I have created a Kubernetes cluster on AWS Ubuntu, and I've run kube-bench to perform the security checks, but I'm getting the error "need apiserver executable but none of the candidates are running". Can you please guide me on how to resolve this?

Fails to scan kops-created clusters

kops deploys most of the Kubernetes binaries inside Docker containers, so the output looks like this:

$ ./kube-bench master
[INFO] Using config file: {A PATH}/config.yaml
[WARN] kube-apiserver: command not found on path - version check skipped
[WARN] kube-scheduler: command not found on path - version check skipped
[WARN] kube-controller-manager: command not found on path - version check skipped
[WARN] config file /etc/kubernetes/apiserver does not exist
[WARN] config file /etc/kubernetes/scheduler does not exist
[WARN] config file /etc/kubernetes/controller-manager does not exist
[WARN] config file /etc/kubernetes/config does not exist
[WARN] config file /etc/etcd/etcd.conf does not exist
[WARN] config file /etc/sysconfig/flanneld does not exist
yaml: line 61: did not find expected key

Please let me know if I can help with further triage

Allow kubernetes version to be manually specified

Automatically determining the k8s version is a very cool feature, but in my case it'd be easier for me to simply specify it than set up the workarounds to get it to function. I imagine others wouldn't mind the ability to manually specify it as well.
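For example, an invocation along these lines (hypothetical syntax; no such flag exists at the time of this issue):

# Pin the Kubernetes version instead of auto-detecting it.
./kube-bench master --version 1.8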

Latest installation fails

Running the installation doesn't seem to do much. It only generates an empty cfg folder:

$ docker run --rm -v `pwd`:/host aquasec/kube-bench:latest
Unable to find image 'aquasec/kube-bench:latest' locally
latest: Pulling from aquasec/kube-bench
b56ae66c2937: Pull complete 
1963e5f576aa: Pull complete 
ee1b6cb1f583: Pull complete 
729afdc9bc53: Pull complete 
Digest: sha256:896995e04785b95fe45de7e6207f68b7f207a461d169e27db8c8ebad2f2632d0
Status: Downloaded newer image for aquasec/kube-bench:latest
cp: can't stat './kube-bench/cfg/*': Not a directory
===============================================
kube-bench is now installed on your host       
Run ./kube-bench to perform a security check   
===============================================
cp: can't stat './kube-bench/kube-bench': Not a directory

$ tree
.
└── cfg

1 directory, 0 files

Add -installation <type> option to select between different installation types

A new -installation flag to choose the default config file and executable names for different installation types.

For example,

  • kube-bench master -installation kops will look for config files and executables in the default locations installed by kops for a master node (see issue #7)
  • kube-bench node -installation hyperkube looks for the defaults installed by hyperkube (see issue #10).
  • kube-bench node -installation kubeadm looks for the defaults installed by kubeadm.

Using the flag means kube-bench picks from a set of default configurations to be defined in config.yaml. The user can override the defaults by modifying config.yaml.
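The per-installation defaults might be grouped in config.yaml something like this (a hypothetical sketch of the layout):

installations:
  kubeadm:
    master:
      apiserver:
        bin: kube-apiserver
        conf: /etc/kubernetes/manifests/kube-apiserver.yaml
  hyperkube:
    master:
      apiserver:
        bin: hyperkube apiserver
        conf: /etc/kubernetes/manifests/kube-apiserver.yaml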

use glide or godep for dependencies

I can't see any method on this repo for adding and installing dependencies at the moment, so building the app is more difficult than it needs to be.

Simply doing glide init and glide i helped, but I wanted to get an idea of what you had in mind before sending a PR.

Failing to run tests if binaries are configured with more than one word

For example if $apiserverbin is set to hyperkube apiserver we get the following:

./kube-bench master -v -c "1.1.1" --installation hyperkube
hyperkube apiserver: command not found in path, error: exec: "hyperkube apiserver": executable file not found in $PATH
hyperkube scheduler: command not found in path, error: exec: "hyperkube scheduler": executable file not found in $PATH
hyperkube controller-manager: command not found in path, error: exec: "hyperkube controller-manager": executable file not found in $PATH
hyperkube apiserver: command not found on path - version check skipped, error: exec: "hyperkube apiserver": executable file not found in $PATH
failed to run:[hyperkube apiserver --version], error: exec: "hyperkube apiserver": executable file not found in $PATH

failed to run: ps -ef | grep hyperkube apiserver | grep -v grep
failed command:[ps -ef], error: write |1: broken pipe
failed to run: ps -ef | grep hyperkube apiserver | grep -v grep
failed command:[grep hyperkube apiserver], error: exit status 2
failed to run: ps -ef | grep hyperkube apiserver | grep -v grep
failed command:[grep -v grep], error: exit status 1

[INFO] Using config file: /home/azureuser/cfg/config.yaml
[WARN] config file  does not exist
[WARN] hyperkube apiserver unsupported version
[INFO] 1 Master Node Security Configuration
[INFO] 1.1 API Server
[FAIL] 1.1.1 Ensure that the --allow-privileged argument is set to false (Scored)

== Remediations ==
1.1.1 Edit the /etc/kubernetes/manifests/kube-apiserver.yaml file on the master node and set the KUBE_ALLOW_PRIV parameter to "--allow-privileged=false"

== Summary ==
0 checks PASS
1 checks FAIL
0 checks WARN

The first set of warnings is about not having the executable in the path - this issue is about the "failed command" errors related to the actual tests.

Support hyperkube

Installed a new k8s cluster on Azure using the az acs CLI (as described here) and it's using hyperkube. This means the executable names are not quite as expected by the CIS Benchmark, as the running processes are hyperkube <component> instead of kube-<component> - the output from ps below makes this clearer.

kube-bench should detect that hyperkube is running (by running ps) and look for hyperkube <component> as the executable in the relevant tests.

On the master node, output from ps -eaf | grep kube:

azureuser@k8s-master-xxxxxxxxx-0:~$ ps -eaf | grep kube
root      1622     1  0 08:35 ?        00:00:00 /usr/bin/docker run --net=host --pid=host --privileged --rm --volume=/dev:/dev --volume=/sys:/sys:ro --volume=/var/run:/var/run:rw --volume=/var/lib/docker/:/var/lib/docker:rw --volume=/var/lib/kubelet/:/var/lib/kubelet:shared --volume=/var/log:/var/log:rw --volume=/etc/kubernetes/:/etc/kubernetes:ro --volume=/srv/kubernetes/:/srv/kubernetes:ro gcrio.azureedge.net/google_containers/hyperkube-amd64:v1.6.6 /hyperkube kubelet --kubeconfig=/var/lib/kubelet/kubeconfig --require-kubeconfig --pod-infra-container-image=gcrio.azureedge.net/google_containers/pause-amd64:3.0 --address=0.0.0.0 --allow-privileged=true --enable-server --enable-debugging-handlers --pod-manifest-path=/etc/kubernetes/manifests --cluster-dns=10.0.0.10 --cluster-domain=cluster.local --register-schedulable=false --node-labels=role=master --cloud-provider=azure --cloud-config=/etc/kubernetes/azure.json --azure-container-registry-config=/etc/kubernetes/azure.json --hairpin-mode=promiscuous-bridge --network-plugin=kubenet --v=2
root      1805  1770  2 08:36 ?        00:01:36 /hyperkube kubelet --kubeconfig=/var/lib/kubelet/kubeconfig --require-kubeconfig --pod-infra-container-image=gcrio.azureedge.net/google_containers/pause-amd64:3.0 --address=0.0.0.0 --allow-privileged=true --enable-server --enable-debugging-handlers --pod-manifest-path=/etc/kubernetes/manifests --cluster-dns=10.0.0.10 --cluster-domain=cluster.local --register-schedulable=false --node-labels=role=master --cloud-provider=azure --cloud-config=/etc/kubernetes/azure.json --azure-container-registry-config=/etc/kubernetes/azure.json --hairpin-mode=promiscuous-bridge --network-plugin=kubenet --v=2
root      2047  2030  0 08:36 ?        00:00:04 /hyperkube scheduler --kubeconfig=/var/lib/kubelet/kubeconfig --leader-elect=true --v=2
root      2155  2138  1 08:36 ?        00:00:59 /hyperkube controller-manager --kubeconfig=/var/lib/kubelet/kubeconfig --allocate-node-cidrs=True --cluster-cidr=10.244.0.0/16 --cluster-name=liz-k8s-cluster-liz-k8s-rg-49b47d --cloud-provider=azure --cloud-config=/etc/kubernetes/azure.json --root-ca-file=/etc/kubernetes/certs/ca.crt --service-account-private-key-file=/etc/kubernetes/certs/apiserver.key --leader-elect=true --v=2
root      2177  2160  1 08:36 ?        00:01:02 /hyperkube apiserver --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota --address=0.0.0.0 --allow-privileged --insecure-port=8080 --secure-port=443 --cloud-provider=azure --cloud-config=/etc/kubernetes/azure.json --service-cluster-ip-range=10.0.0.0/16 --etcd-servers=http://127.0.0.1:2379 --etcd-quorum-read=true --advertise-address=10.240.255.5 --tls-cert-file=/etc/kubernetes/certs/apiserver.crt --tls-private-key-file=/etc/kubernetes/certs/apiserver.key --client-ca-file=/etc/kubernetes/certs/ca.crt --service-account-key-file=/etc/kubernetes/certs/apiserver.key --storage-backend=etcd2 --v=4
root      2278  2263  0 08:36 ?        00:00:00 /bin/bash /opt/kube-addons.sh
root      2477  2462  0 08:36 ?        00:00:07 /hyperkube proxy --kubeconfig=/var/lib/kubelet/kubeconfig --cluster-cidr=10.244.0.0/16
azureus+  8538  8069  0 09:32 pts/0    00:00:00 grep --color=auto kube

Kube-bench deployment in ubuntu

When installing kube-bench on Ubuntu using the "docker" installation method, kube-bench ends up in an odd directory inside "/var", which prevents the executable from finding its config file.
I would suggest installing kube-bench in the "~" directory as before.

Add v1.8 support

Kubernetes v1.8 is now released 🎉, let's get some YAML profiles for this fancy new version as well 😄

Allow for etcd options specified via environment variable

I get failures like

1.5.1 Follow the etcd service documentation and configure TLS encryption.
Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the
master node and set the below parameters.
--ca-file=</path/to/ca-file>
--key-file=</path/to/key-file>

1.5.2 Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
node and set the below parameter.
--client-cert-auth="true"

However (via kops) I'm setting these via environment variables:

    - name: ETCD_CERT_FILE
      value: /srv/kubernetes/etcd.pem
    - name: ETCD_KEY_FILE
      value: /srv/kubernetes/etcd-key.pem

Kops code: https://github.com/kubernetes/kops/blob/37d4b53d0d5507025ef0bf89dcf13e62a355d1c0/protokube/pkg/protokube/etcd_manifest.go#L186

Support multiple releases of Kubernetes (including 1.6)

Not everyone is running 1.7 yet, and in the future no doubt there will be folks running on 1.7 for some time after 1.8 is released, and so on. We should support running the right set of tests for the version that we detect.

We could do this with new optional fields for each test in the YAML:

  • retired-in: - if this is set, don't load this test if the detected release >= specified release number
  • added-in: - if this is set, don't load this test if the detected release < specified release number

To support Kubernetes 1.6 we need to identify the new tests for 1.7 and add added-in: 1.7 for them, and also add back any tests we removed, giving them retired-in: 1.7.
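In a test definition the new fields might look like this (a sketch of the proposed schema, with illustrative IDs; nothing here is implemented yet):

checks:
  - id: 1.1.34
    text: "Ensure that the --experimental-encryption-provider-config argument is set as appropriate (Scored)"
    added-in: 1.7       # skip if the detected release is below 1.7
  - id: 1.3.3
    text: "Ensure that the --insecure-experimental-approve-all-kubelet-csrs-for-group argument is not set (Scored)"
    retired-in: 1.7     # skip if the detected release is 1.7 or later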

Segmentation violation

Got a segmentation violation when running kube-bench as root.
centos:7
Kubernetes 1.8.1

Note that I am only getting this when running as root. It works when running as a non-root user.

[root@ec2-user]# ./kube-bench master
[WARN] Missing config file for flanneld
[WARN] Missing config file for kubernetes
[WARN] Missing config file for apiserver
[WARN] Missing config file for scheduler
[WARN] Missing config file for controllermanager
[WARN] Missing config file for etcd
[WARN] Missing config file for flanneld
[WARN] Missing config file for kubernetes
[WARN] Kubernetes version check skipped, with error getting kubectl version
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x8153c5]

goroutine 1 [running]:
github.com/aquasecurity/kube-bench/cmd.runChecks(0x90268a, 0x6)
        /go/src/github.com/aquasecurity/kube-bench/cmd/common.go:65 +0x285
github.com/aquasecurity/kube-bench/cmd.glob..func2(0xb40400, 0xb642b0, 0x0, 0x0)
        /go/src/github.com/aquasecurity/kube-bench/cmd/master.go:28 +0x36
github.com/spf13/cobra.(*Command).execute(0xb40400, 0xb642b0, 0x0, 0x0, 0xb40400, 0xb642b0)
        /go/src/github.com/spf13/cobra/command.go:702 +0x2bd
github.com/spf13/cobra.(*Command).ExecuteC(0xb40840, 0x865c00, 0x0, 0x0)
        /go/src/github.com/spf13/cobra/command.go:783 +0x349
github.com/spf13/cobra.(*Command).Execute(0xb40840, 0xb642b0, 0x0)
        /go/src/github.com/spf13/cobra/command.go:736 +0x2b
github.com/aquasecurity/kube-bench/cmd.Execute()
        /go/src/github.com/aquasecurity/kube-bench/cmd/root.go:53 +0x9b
main.main()
        /go/src/github.com/aquasecurity/kube-bench/main.go:22 +0x20

Warning about blank config file name

Running ./kube-bench master I get the following warnings:

[WARN] config file /etc/kubernetes/apiserver does not exist
[WARN] config file /etc/kubernetes/scheduler does not exist
[WARN] config file does not exist

Note that the last one has no config file specified.

If I specify the correct installation with ./kube-bench master --installation kubeadm I still get the warning about the blank config file:

[WARN] config file does not exist

etcd tests expect flags to be set with an equals sign

We currently run etcd using flags without the equals sign:

For example, 1.4.11 will fail with this etcd flag style

etcd --data-dir /var/vcap/store/etcd

It looks like the tests assume that the equals sign is always present:

audit: "ps -ef | grep $etcdbin | grep -v grep | grep -o data-dir=.* | cut -d= -f2 | xargs stat -c %a"

Could the tests be updated to support both flag styles?
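One way would be for the audit line to accept either an equals sign or whitespace after the flag, e.g. (a sketch, untested across ps variants):

audit: "ps -ef | grep $etcdbin | grep -v grep | sed -n 's/.*data-dir[= ]\\([^ ]*\\).*/\\1/p' | xargs stat -c %a"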

File permission and ownership checks giving warnings on success

Even if a config file has the right permissions and ownership, I'm seeing warnings for the relevant tests.

Also, we need to make sure that the permissions checks pass when the actual permissions are more restrictive than required: e.g. if the test wants 644 or more restrictive, then 600 is OK.

Extracts from output:

[WARN] 2.2.3 Ensure that the kubelet file permissions are set to 644 or more restrictive (Scored)
[WARN] 2.2.4 Ensure that the kubelet file ownership is set to root:root (Scored)
...
2.2.3 Run the below command (based on the file location on your system) on the each worker node.
For example, chmod 644 /etc/origin/node/node-config.yaml
2.2.4 Run the below command (based on the file location on your system) on the each worker node.
For example, chown root:root /etc/origin/node/node-config.yaml

But the file does exist, with good permissions and ownership:

[root@ops215-vm0 ec2-user]# ls -l /etc/origin/node/node-config.yaml
-rw-r--r--. 1 root root 1167 Jul 24 16:16 /etc/origin/node/node-config.yaml

(In this case I've modified the $config file setting)
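For the "or more restrictive" part, the check needs a bitwise comparison of the mode rather than a string match; a bash sketch:

# PASS if the file grants no bits beyond 644: 600 and 640 pass, 664 fails.
want=644
have=$(stat -c %a /etc/origin/node/node-config.yaml)
if [ $(( 8#$have & ~8#$want )) -eq 0 ]; then echo PASS; else echo FAIL; fi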

kube-bench assumes kubectl is available on the node

kube-bench determines the Kubernetes version via kubectl, which isn't available on our kops-installed node instances. kubelet --version would return the node version. Alternatively, kube-bench should gain an option to manually set the version.
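A sketch of the suggested fallback, reducing the kubelet version to major.minor when kubectl isn't available:

# kubelet prints e.g. "Kubernetes v1.8.6"; reduce that to "1.8".
ver=$(kubelet --version | awk '{print $2}' | sed 's/^v//' | cut -d. -f1,2)
echo "detected Kubernetes version: $ver"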

kube-bench unable to get kubectl version

Working on a 1.8.6 k8s cluster running on Google.

The kubectl version on my node is:

gke-gke13-default-pool-7e367c4e-sj0w ~ # kubectl version
Client Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.6-gke.0", GitCommit:"ee9a97661f14ee0b1ca31d6edd30480c89347c79", GitTreeState:"clean", BuildDate:"2018-01-05T03:38:14Z", GoVersion:"go1.8.3b4", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Error from kube-bench:

[WARN] Unable to get kubectl version, using default version: 1.6
Reading 1.6 specific configuration file

Update tests to match the new release of the CIS Benchmark

New version v1.1.0 of the Benchmark has been released with the following changes. We need to modify the test config files to match.

New Recommendations

  • 1.1.32 Ensure that the --authorization-mode argument is set to Node
  • 1.1.33 Ensure that the admission control policy is set to NodeRestriction
  • 1.1.34 Ensure that the --experimental-encryption-provider-config argument is set as appropriate
  • 1.1.35 Ensure that the encryption provider is set to aescbc
  • 1.3.7 Ensure that the RotateKubeletServerCertificate argument is set to true
  • 1.6.8 Configure Network policies as appropriate
  • 2.1.14 Ensure that the RotateKubeletClientCertificate argument is set to true
  • 2.1.15 Ensure that the RotateKubeletServerCertificate argument is set to true
  • 2.2.7 Ensure that the certificate authorities file permissions are set to 644 or more restrictive
  • 2.2.8 Ensure that the client certificate authorities file ownership is set to root:root

Deleted Recommendations

  • 1.3.3 Ensure that the --insecure-experimental-approve-all-kubelet-csrs-for-group argument is not set
  • 1.6.5 Avoid using Kubernetes Secrets

Misleading warning message if version check is skipped

If the executable(s) aren't on the path (pending issue #15) we skip the version check, but we get the following warning:

[WARN] kube-apiserver unsupported version

This is misleading - we should distinguish between warning that the version is incorrect and warning that the version check has been skipped.

ps usage is not portable

The current usage of ps to detect binaries is not portable. BusyBox's ps is POSIX-compatible, although the options are undocumented in the command help. Unfortunately -C, -o cmd, and --no-headers aren't part of the POSIX spec; rather, they're GNU-specific options, I think.

$ docker run --rm gcr.io/google-containers/kube-apiserver:v1.9.3 ps -C test
ps: invalid option -- C
BusyBox v1.28.0 (2018-01-16 23:29:21 UTC) multi-call binary.

Usage: ps [-o COL1,COL2=HEADER] [-T]

Show list of processes

	-o COL1,COL2=HEADER	Select columns for display
	-T			Show threads

Doesn't look like they're included in the BSD/OSX ps either, but that's a much smaller concern. (-C is related to CPU percentage calculation, not process name)

$ sw_vers
ProductName:	Mac OS X
ProductVersion:	10.13.1
BuildVersion:	17B48
$ man ps
[...]
SYNOPSIS
     ps [-AaCcEefhjlMmrSTvwXx] [-O fmt | -o fmt] [-G gid[,gid...]] [-g grp[,grp...]] [-u uid[,uid...]] [-p pid[,pid...]] [-t tty[,tty...]] [-U user[,user...]]
     ps [-L]
[...]

github.com/shirou/gopsutil may be useful as a portable solution.
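In the meantime, a least-common-denominator probe could avoid the GNU-only options entirely (a sketch; it leans on BusyBox ps listing all processes when given no flags):

# Prefer the POSIX form; fall back to bare ps for BusyBox.
procs=$(ps -A -o args= 2>/dev/null || ps)
echo "$procs" | grep -v grep | grep -q kube-apiserver && echo "apiserver is running"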

use goreleaser for final builds

I've found goreleaser really improves the release process for Go tools:

https://goreleaser.com/

This way, there isn't a reliance on the Makefile and the Docker installation method. GoReleaser can build Docker images as well in an automated way. If there's any interest here, I'm happy to send a PR.

Unnecessary warnings about missing config files

Running on 1.8 we expect to find pod spec YAML files for many components. However, the code assumes they will have config files, so we get a number of unnecessary warnings, e.g.:

[ec2-user@k8s344-vm0 ~]$ ./kube-bench master
[WARN] Missing config file for flanneld
[WARN] Missing config file for kubernetes
[WARN] Missing config file for apiserver
[WARN] Missing config file for scheduler
[WARN] Missing config file for controllermanager
[WARN] Missing config file for etcd
[WARN] Missing config file for flanneld
[WARN] Missing config file for kubernetes

Handle errors from tests where executable not running

Some tests can generate errors:

failed to run: ps -ef | grep etcd | grep -v grep | grep -o data-dir=.* | cut -d= -f2 | xargs stat -c %a
failed command:[grep -o data-dir=.*], error: exit status 1
failed to run: ps -ef | grep etcd | grep -v grep | grep -o data-dir=.* | cut -d= -f2 | xargs stat -c %a
failed command:[xargs stat -c %a], error: exit status 123

We should be able to figure out what these errors mean for the test status, and give a better [WARN] or [FAIL] message as appropriate. For example, I guess these both happen if etcd isn't running? If that's the case, we should say so in a [WARN] and only show the detail seen above if the --verbose flag is on.

See more context for this in #7

Container installation failing

When running the container installation command docker run --rm -v `pwd`:/host aquasec/kube-bench:latest, it fails to place the kube-bench binary on the host. See the output below.

ubuntu@ip-10-1-1-215:~$ sudo docker run --rm -v `pwd`:/host aquasec/kube-bench:latest
cp: can't stat './kube-bench/cfg/*': Not a directory
cp: can't stat './kube-bench/kube-bench': Not a directory
===============================================
kube-bench is now installed on your host
Run ./kube-bench to perform a security check
===============================================
ubuntu@ip-10-1-1-215:~$ ls
cfg
ubuntu@ip-10-1-1-215:~$

This is image ID dd0785160c94, digest sha256:896995e04785b95fe45de7e6207f68b7f207a461d169e27db8c8ebad2f2632d0.

Unexpected Client & Server messages on 1.6

I ran on a machine with 1.6 and got the following:

Unexpected Client version 1.6
Unexpected Server version 1.6

From an extremely quick look, I think this is because we always verify against kubeMajorVersion and kubeMinorVersion, which are currently hard-coded to 1.7?
