
vulnerability-operator

Scans SBOMs and Images for vulnerabilities

Overview

This operator periodically scans all SBOMs from a Git repository for vulnerabilities using Grype. The resulting list can be emitted as a JSON file served via an HTTP endpoint and/or as Prometheus metrics; more targets may be added in the future.

Kubernetes Compatibility

The image bundles a specific version of k8s.io/client-go. Kubernetes aims to provide forwards and backwards compatibility of one minor version between client and server:

| vulnerability-operator | k8s.io/{api,apimachinery,client-go} | expected Kubernetes compatibility |
| --- | --- | --- |
| main   | v0.30.0 | 1.29.x, 1.30.x, 1.31.x |
| 0.24.0 | v0.30.0 | 1.29.x, 1.30.x, 1.31.x |
| 0.23.0 | v0.29.3 | 1.28.x, 1.29.x, 1.30.x |
| 0.22.0 | v0.28.4 | 1.27.x, 1.28.x, 1.29.x |
| 0.19.0 | v0.27.4 | 1.26.x, 1.27.x, 1.28.x |
| 0.17.0 | v0.26.3 | 1.25.x, 1.26.x, 1.27.x |
| 0.13.0 | v0.25.4 | 1.24.x, 1.25.x, 1.26.x |
| 0.8.0  | v0.24.3 | 1.23.x, 1.24.x, 1.25.x |
| 0.5.0  | v0.23.5 | 1.22.x, 1.23.x, 1.24.x |

In general, however, the operator will also work with other Kubernetes versions.

Installation

Manifests

kubectl apply -f deploy/

Helm-Chart

Create a YAML file with the required configuration first, or pass the values via helm flags instead.

helm repo add ckotzbauer https://ckotzbauer.github.io/helm-charts
helm install ckotzbauer/vulnerability-operator -f your-values.yaml

Configuration

All parameters are CLI flags.

| Parameter | Required | Default | Description |
| --- | --- | --- | --- |
| verbosity | false | info | Log level (debug, info, warn, error, fatal, panic) |
| cron | false | @hourly | Background-service interval (CRON). All options from github.com/robfig/cron are allowed |
| sources | false | git | Comma-delimited list of sources to gather SBOMs from. Currently git is the only possible source |
| targets | false | json | Comma-delimited list of targets to send vulnerability data to. Possible targets: json, metrics, policyreport |
| grype-config-file | false | "" | Path to a grype config file to specify ignore rules |
| filter-config-file | false | "" | Path to a filter config file (YAML-formatted) to specify ignore and audit rules |
| only-fixed | false | false | Only report CVEs where a fix is available |
| min-severity | false | medium | Only report CVEs with a severity greater than or equal to this value (negligible, low, medium, high, critical) |
| git-workingtree | false | /work | Directory to place the git repo in |
| git-repository | true when the git source is used | "" | Git repository URL (HTTPS) |
| git-branch | false | main | Git branch to check out |
| git-path | false | "" | Folder path inside the git repository |
| git-access-token | false | "" | Git personal access token with read permissions |
| git-username | false | "" | Git username |
| git-password | false | "" | Git password |
| github-app-id | false | "" | GitHub App ID |
| github-app-installation-id | false | "" | GitHub App installation ID |
| reports-dir | false | /reports | Directory to place the reports in |
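The --min-severity flag implies an ordering of the five severity levels. A minimal Go sketch of that comparison, with an assumed rank mapping (not the operator's actual code):

```go
package main

import "fmt"

// severityRank orders the Grype severities from lowest to highest,
// matching the list in the --min-severity description above.
var severityRank = map[string]int{
	"negligible": 0, "low": 1, "medium": 2, "high": 3, "critical": 4,
}

// meetsThreshold reports whether sev is greater than or equal to min.
func meetsThreshold(sev, min string) bool {
	return severityRank[sev] >= severityRank[min]
}

func main() {
	fmt.Println(meetsThreshold("high", "medium")) // true
	fmt.Println(meetsThreshold("low", "medium"))  // false
}
```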

The flags can be passed as args or as environment variables prefixed with VULN_, which allows injecting sensitive configs as secret values.

Example Helm-Config

args:
  targets: metrics
  min-severity: low
  git-repository: https://github.com/XXX/XXX
  git-path: dev-cluster/sboms
  verbosity: debug
  cron: "0 0 * * * *"

envVars:
  - name: VULN_GIT_ACCESS_TOKEN
    valueFrom:
      secretKeyRef:
        name: "vulnerability-operator"
        key: "accessToken"

servicemonitor:
  enabled: true

Sources

Git

The contents of this Git repository are typically generated by the sbom-operator. All files named sbom.json, sbom.txt, sbom.xml or sbom.spdx are gathered according to the git-* config flags. You can use token-based authentication (e.g. a PAT for GitHub) with --git-access-token, BasicAuth with username and password (--git-username, --git-password), or GitHub App authentication (--github-app-id, --github-app-installation-id, env: VULN_GITHUB_APP_PRIVATE_KEY). The private key has to be Base64-encoded.

Targets

JSON

All found vulnerabilities can be requested as a file from the /reports/report.json endpoint. The data is structured like this:

Example JSON
[
  {
    "ID": "CVE-2019-19924",
    "Severity": "Medium",
    "Type": "rpm",
    "Package": "sqlite",
    "Installed": "3.7.17-8.el7_7.1",
    "FixedIn": [],
    "FixState": "wont-fix",
    "URLs": [
      "https://access.redhat.com/security/cve/CVE-2019-19924"
    ],
    "ImageID": "docker.elastic.co/beats/filebeat@sha256:e418d12e08a1b74140c9edc6bdc773110b0f802340e25e2716950bac86ae14ce",
    "Containers": [
      {
        "PodNamespace": "elastic-system",
        "PodName": "filebeat-filebeat-6xkf4",
        "ContainerName": "filebeat"
      },
      {
        "PodNamespace": "elastic-system",
        "PodName": "filebeat-filebeat-g6zbh",
        "ContainerName": "filebeat"
      },
      {
        "PodNamespace": "elastic-system",
        "PodName": "filebeat-filebeat-jkgnh",
        "ContainerName": "filebeat"
      }
    ]
  },
  {
    "ID": "CVE-2020-16250",
    "Severity": "Critical",
    "Type": "go-module",
    "Package": "github.com/hashicorp/vault/api",
    "Installed": "v1.3.1",
    "FixedIn": [],
    "FixState": "unknown",
    "URLs": [
      "https://www.hashicorp.com/blog/category/vault/",
      "https://github.com/hashicorp/vault/blob/master/CHANGELOG.md#151",
      "http://packetstormsecurity.com/files/159478/Hashicorp-Vault-AWS-IAM-Integration-Authentication-Bypass.html"
    ],
    "ImageID": "ghcr.io/kyverno/kyverno@sha256:4fc715e9287446222bf12b1245899b195ecea8beda54c6f6a3587373c376cad1",
    "Containers": [
      {
        "PodNamespace": "kyverno",
        "PodName": "kyverno-555dcf9f66-csmq5",
        "ContainerName": "kyverno"
      },
      {
        "PodNamespace": "kyverno",
        "PodName": "kyverno-555dcf9f66-gsphr",
        "ContainerName": "kyverno"
      }
    ]
  }
]

Metrics

Every CVE is exported as a Prometheus vuln_operator_cves gauge metric, with one sample per container it appears in.

vuln_operator_cves{container_name="kyverno", cve="CVE-2020-16250", fix_state="unknown", image_id="ghcr.io/kyverno/kyverno@sha256:4fc715e9287446222bf12b1245899b195ecea8beda54c6f6a3587373c376cad1", package="github.com/hashicorp/vault/api", k8s_name="kyverno", k8s_namespace="kyverno", k8s_kind="Deployment", severity="Critical", type="go-module", version="v1.3.1"}

Note: The operator removes all metrics from the vector before re-populating it. During that short window the exposed data is incomplete.

Grafana Dashboard

There's a dashboard for Grafana to view the collected vulnerability metrics.

PolicyReport

With the policyreport target set, the operator stores the scan results as PolicyReport CRs inside the cluster. Each PolicyReport object is owned by the corresponding pod so that Kubernetes can clean it up automatically.

Example PolicyReport
apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
  creationTimestamp: "2022-06-18T14:57:27Z"
  generation: 3
  labels:
    kubernetes.io/created-by: vulnerability-operator
  name: vuln-kyverno-688464bd95-55drc
  namespace: kyverno
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: kyverno-688464bd95-55drc
    uid: 24bdccbb-653a-4005-8463-cf511a919037
  resourceVersion: "160903586"
  uid: e3cd1447-49ef-481d-b44b-cc38239ea870
results:
- category: github.com/theupdateframework/go-tuf
  message: 'github.com/theupdateframework/go-tuf: GHSA-66x3-6cw3-v5gj'
  policy: GHSA-66x3-6cw3-v5gj
  properties:
    FixedVersion: 0.3.0
    InstalledVersion: v0.0.0-20220211205608-f0c3294f63b9
    URL: https://github.com/advisories/GHSA-66x3-6cw3-v5gj
  resources:
  - kind: Pod
    name: kyverno-688464bd95-55drc
    namespace: kyverno
    uid: 24bdccbb-653a-4005-8463-cf511a919037
  result: fail
  severity: high
  source: vulnerability-operator
  timestamp:
    nanos: 728434599
    seconds: 1655564847
- category: google.golang.org/protobuf
  message: 'google.golang.org/protobuf: CVE-2015-5237'
  policy: CVE-2015-5237
  properties:
    FixedVersion: ""
    InstalledVersion: v1.28.0
    URL: https://github.com/google/protobuf/issues/760
  resources:
  - kind: Pod
    name: kyverno-688464bd95-55drc
    namespace: kyverno
    uid: 24bdccbb-653a-4005-8463-cf511a919037
  result: fail
  severity: high
  source: vulnerability-operator
  timestamp:
    nanos: 728434599
    seconds: 1655564847
- category: google.golang.org/protobuf
  message: 'google.golang.org/protobuf: CVE-2021-22570'
  policy: CVE-2021-22570
  properties:
    FixedVersion: ""
    InstalledVersion: v1.28.0
    URL: https://github.com/protocolbuffers/protobuf/releases/tag/v3.15.0
  resources:
  - kind: Pod
    name: kyverno-688464bd95-55drc
    namespace: kyverno
    uid: 24bdccbb-653a-4005-8463-cf511a919037
  result: fail
  severity: high
  source: vulnerability-operator
  timestamp:
    nanos: 728434599
    seconds: 1655564847
- category: google.golang.org/protobuf
  message: 'google.golang.org/protobuf: CVE-2015-5237'
  policy: CVE-2015-5237
  properties:
    FixedVersion: ""
    InstalledVersion: v1.28.0
    URL: https://github.com/google/protobuf/issues/760
  resources:
  - kind: Pod
    name: kyverno-688464bd95-55drc
    namespace: kyverno
    uid: 24bdccbb-653a-4005-8463-cf511a919037
  result: fail
  severity: high
  source: vulnerability-operator
  timestamp:
    nanos: 728434599
    seconds: 1655564847
- category: google.golang.org/protobuf
  message: 'google.golang.org/protobuf: CVE-2021-22570'
  policy: CVE-2021-22570
  properties:
    FixedVersion: ""
    InstalledVersion: v1.28.0
    URL: https://github.com/protocolbuffers/protobuf/releases/tag/v3.15.0
  resources:
  - kind: Pod
    name: kyverno-688464bd95-55drc
    namespace: kyverno
    uid: 24bdccbb-653a-4005-8463-cf511a919037
  result: fail
  severity: high
  source: vulnerability-operator
  timestamp:
    nanos: 728434599
    seconds: 1655564847
- category: github.com/theupdateframework/go-tuf
  message: 'github.com/theupdateframework/go-tuf: GHSA-66x3-6cw3-v5gj'
  policy: GHSA-66x3-6cw3-v5gj
  properties:
    FixedVersion: 0.3.0
    InstalledVersion: v0.0.0-20220211205608-f0c3294f63b9
    URL: https://github.com/advisories/GHSA-66x3-6cw3-v5gj
  resources:
  - kind: Pod
    name: kyverno-688464bd95-55drc
    namespace: kyverno
    uid: 24bdccbb-653a-4005-8463-cf511a919037
  result: fail
  severity: high
  source: vulnerability-operator
  timestamp:
    nanos: 728434599
    seconds: 1655564847
scope:
  kind: Pod
  name: kyverno-688464bd95-55drc
  namespace: kyverno
  uid: 24bdccbb-653a-4005-8463-cf511a919037
summary:
  error: 0
  fail: 6
  pass: 0
  skip: 0
  warn: 0

Context-based vulnerability filtering

You can apply filter rules to either ignore certain vulnerabilities or to move them into a separate metric. This is useful if you want to exclude vulnerabilities from your metric, for example false positives or issues outside of the application's execution path.

Filter rules support the following properties:

  • CVE
  • package name
  • container context (with the following subset)
    • image
    • namespace
    • kind
    • name

Container-context properties support the glob patterns * and ?, while CVE and package only match their exact values. A container context only applies if all of its properties match (AND condition).
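The AND semantics described above can be sketched in Go. The use of path.Match for glob handling is an assumption for illustration; the operator's actual matcher may differ:

```go
package main

import (
	"fmt"
	"path"
)

// FilterContext holds the glob patterns of one context entry.
type FilterContext struct {
	Image, Namespace, Kind, Name string
}

// ContainerInfo describes where a vulnerable image is running.
type ContainerInfo struct {
	Image, Namespace, Kind, Name string
}

// matches reports whether all four patterns of the context match the
// container (AND condition). Glob handling here uses path.Match.
func (f FilterContext) matches(c ContainerInfo) bool {
	for _, p := range [][2]string{
		{f.Image, c.Image}, {f.Namespace, c.Namespace},
		{f.Kind, c.Kind}, {f.Name, c.Name},
	} {
		ok, err := path.Match(p[0], p[1])
		if err != nil || !ok {
			return false
		}
	}
	return true
}

func main() {
	ctx := FilterContext{Image: "gitlab/gitlab-ce*", Namespace: "*", Kind: "*", Name: "*"}
	c := ContainerInfo{Image: "gitlab/gitlab-ce:15.5.6-ce.0", Namespace: "git", Kind: "Deployment", Name: "gitlab"}
	fmt.Println(ctx.matches(c)) // true
}
```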

The filter rules are provided through a YAML-formatted file. Its path is configured with --filter-config-file.

Example filter configuration
ignore:
  # Ignore any vulnerabilities in ruby gem rdoc
  - package: rdoc
  # Ignore CVE GHSA-8cr8-4vfw-mr7h
  - vulnerability: GHSA-8cr8-4vfw-mr7h
audit:
  # If GHSA-fp4w-jxhp-m23p was found in gitlab-ce images, move it to the "audit" metric
  - vulnerability: GHSA-fp4w-jxhp-m23p
    context:
    - image: gitlab/gitlab-ce*
      namespace: "*"
      kind: "*"
      name: "*"
  # Move any CVE for the git package to the "audit" metric, if it was found in a gitlab-*-redis deployment
  - package: git
    context:
    - image: "*"
      namespace: "*"
      kind: Deployment
      name: gitlab-*-redis

Targets

For the Prometheus target, the separate metric vuln_operator_cves_audit contains matches from the audit section.

For the json target, a separate file audited.json is provided.

Security

The Docker image is based on a scratch image to reduce the attack surface and keep the image small. Furthermore, the image and release artifacts are signed with cosign and attested with provenance files. The release process satisfies SLSA Level 2. All of these metadata files are also stored in a dedicated repository, ghcr.io/ckotzbauer/vulnerability-operator-metadata. Both SLSA and the signatures are still experimental for this project. When discovering security issues, please refer to the Security process.

Contributing

Please refer to the Contribution guidelines.

Code of conduct

Please refer to the Conduct guidelines.

vulnerability-operator's People

Contributors: actions-user, artsv79, ckotzbauer, dependabot[bot], muellerst-hg, nicholasdille, renovate-bot, renovate[bot], samcornwell

vulnerability-operator's Issues

Context-based vulnerability filtering

It might occur that a CVE is a permanent false positive (e.g. rubygem rexml 3.2.3.1 in gitlab/gitlab-ce:15.5.6-ce.0) or that a container is unaffected by a CVE.

In order to focus on the real issues, we'd like our view (Grafana consuming the Prometheus metric) to filter these CVEs. One idea was to add another label "audit_result" to the Prometheus metric vuln_operator_cves.

How do you handle this?
In which ways could vulnerability-operator support audits?

cc/ @nicholasdille

feat: Optional output of grype and prometheus debug statements

While I was debugging an issue, I decided to enable the logging package in the grype module so that I could see if my SBOM was actually being processed (I received no results initially, but resolved the issue; see #386 for details). I wondered if you wanted me to submit a PR with this modification. I fed the verbosity argument into the grype logging module to do this. If this logging feature is desired, it might be better to add a new argument called --grype-verbosity or something similar, because the output is extremely verbose and likely not desired in most cases.

Let me know if:

  1. you would like me to submit a PR with this modification
  2. if "yes", do you want me to add a separate argument called --grype-verbosity or similar so that there is not a flood of output in the standard vulnerability-operator use case

Another thing I would like to attempt is to enable logging from the prometheus metrics module. I feel this will be useful to see when/if the metrics are being scraped and whatever other debug output the prometheus endpoint code can provide. Let me know if you think this is something you would want in the main code base. I can't guarantee this feature at the moment because I haven't looked into how easy or hard it is. The grype logging was non-trivial, partly because I've been learning Go along the way, and the grype logging module makes you pass some "extra stuff" which turns out to be useless but necessary for things to operate properly.

BTW thanks for this project, it has filled a use case that I had initially started writing from scratch, but your features check a lot of boxes for what I want to accomplish, and after some tweaking it works beautifully. Also it has helped me to learn Go as I had to figure out why certain things were going wrong along the way to getting everything working :)

question: the purpose of audit vs ignore rules, and what is the purpose of the two different config files

I was wondering what the intended purpose of audit rules and ignore rules is. My team and I have guessed that ignore rules are for false positives. We have also guessed that audit rules are for CVEs that we have reviewed and determined to be true positives, but which we have justification or some other reason to consider not a threat. Is this true?

Also, what is the purpose of grype-config-file versus filter-config-file? They seem to have the same general function, except that grype-config-file is only for ignores. Does grype-config-file have the same format documented at https://github.com/anchore/grype#specifying-matches-to-ignore and is it simply passed directly to the grype library, whereas the filter-config-file is applied after the fact by the vulnerability-operator itself? If this is true, is there a reason to use one over the other for ignore rules?

Action Required: Fix Renovate Configuration

There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.

Error type: Cannot find preset's package (github>ckotzbauer/renovate-config:default)

Build fails because of missing dependency

Running go mod tidy after cloning the repo complains that the opentelemetry package does not contain a required module:

go: downloading go.opentelemetry.io/otel/metric v0.30.0
github.com/ckotzbauer/vulnerability-operator/internal/vuln/grype imports
	github.com/anchore/grype/grype/pkg imports
	github.com/sigstore/cosign/pkg/signature imports
	github.com/sigstore/cosign/pkg/cosign imports
	github.com/sigstore/cosign/cmd/cosign/cli/fulcio/fulcioverifier/ctl imports
	github.com/google/certificate-transparency-go imports
	go.etcd.io/etcd/v3 imports
	go.etcd.io/etcd/tests/v3/integration imports
	go.etcd.io/etcd/server/v3/embed imports
	go.opentelemetry.io/otel/semconv: module go.opentelemetry.io/otel@latest found (v1.7.0), but does not contain package go.opentelemetry.io/otel/semconv
github.com/ckotzbauer/vulnerability-operator/internal/vuln/grype imports
	github.com/anchore/grype/grype/pkg imports
	github.com/sigstore/cosign/pkg/signature imports
	github.com/sigstore/cosign/pkg/cosign imports
	github.com/sigstore/cosign/cmd/cosign/cli/fulcio/fulcioverifier/ctl imports
	github.com/google/certificate-transparency-go imports
	go.etcd.io/etcd/v3 imports
	go.etcd.io/etcd/tests/v3/integration imports
	go.etcd.io/etcd/server/v3/embed imports
	go.opentelemetry.io/otel/exporters/otlp imports
	go.opentelemetry.io/otel/sdk/metric/controller/basic imports
	go.opentelemetry.io/otel/metric/registry: module go.opentelemetry.io/otel/metric@latest found (v0.30.0), but does not contain package go.opentelemetry.io/otel/metric/registry

doc: document the required directory structure for git repo and the requirement that a container must be running on k8s cluster

Preamble:

When I first tried out installing this app, I manually generated a single SBOM and put it in a repo. I was getting an empty results.json file. I hacked the code and enabled debugging output from the grype library. I found from the grype debug output that it was in fact running on my SBOM and finding vulnerabilities, so I figured that the results were being filtered out for some reason. I ran the code through a debugger and found the function which extracts the "ImageID" from the file path, which puzzled me at first: it would always return ., which wouldn't match any containers in Kubernetes. Then I looked at the sbom-operator and saw the directory structure it used. It dawned on me that this directory structure is a requirement for vulnerability-operator to function correctly. My initial thought was that I should modify the code to extract the ImageID from the SBOM itself, but given the different SBOM types and even differences between schemas of the same type (syft, for example), this may take some effort to do in a robust way.
Also, it was not clear to me from the documentation that results are filtered out if there are no matching containers in the cluster. It would be nice at some point if the scanning of SBOMs could be decoupled from actively running containers in a cluster; for example, if I want to scan all of the images in my registry for which I have created SBOMs, or even just SBOMs generated from a list of images I have a particular interest in. I may open an issue for this at some point, but for the time being the functionality of scanning my cluster images is a very nice start.

Request:

Document the required file structure. Perhaps even just link to the sbom-operator README.md, indicating that this is the required file structure.

Document the fact that there must be containers using your scanned images in order for vulnerabilities to show up in your reports/metrics.

Initial thoughts and ideas

Sources

  • Load SBOMs from Git-Repository (previously created from sbom-operator)
  • Cron-Trigger (like sbom-operator)
  • Webhook-Trigger (e.g. called from sbom-operator)

Targets

  • Prometheus-Metrics (⚠️ needs more specification)
  • Messaging (How to avoid sending the same messages for found CVEs on each scan?)
  • Report generation
    • READMEs
    • Web-Report served from vulnerability-operator itself or uploaded to a destination
    • JSON-Report served from vulnerability-operator itself
  • PolicyReport-CRDs (maybe there's a way to include this in Kyverno's Policy-Reporter)

Scanning

CVE-Filtering-Options

  • Only fixed
  • Severity-Threshold
  • Ignorelist

Build / Security

Deployment

  • Plain Kubernetes-YAMLs
  • Helm-Chart
  • Built-in (but optional) ServiceMonitor for Prometheus-Operator CRD

Support for OCI Registries

Hi,
according to the documentation: "This operator scans all SBOMs from a git-repository for vulnerabilities using Grype".

The sbom-operator could generate a SBOM and store it into an OCI-Registry.

Do you think it would be possible to support OCI registries in vulnerability-operator?

question: filter file conditions OR/AND

I think I understand the filter file conditions (because there is only one way that makes sense to me), but I just wanted to confirm how it works.

The audit and ignore sections are each basically a big OR (vulns/packages ORed), where each vuln/package is another OR (its list of contexts ORed), and each context is an AND (image, namespace, etc. ANDed).

For example, these two blocks are equivalent:

audit:
  - vulnerability: CVE-X
    context:
    - image: IMAGE1
      namespace: NAMESPACE1
  - vulnerability: CVE-X
    context:
    - image: IMAGE2
      namespace: NAMESPACE2
audit:
 - vulnerability: CVE-X
   context:
   - image: IMAGE1
     namespace: NAMESPACE1
   - image: IMAGE2
     namespace: NAMESPACE2

Correct?

Thanks

bug: logical errors in the filtering algorithm

I've been getting some incorrect results with filter files. Upon studying the code, I found two key parts of the code that are flawed. I generated my filter files according to the documentation and the answer here: #420. I have created a fix and will be submitting the pull request (edit: submitted, #445), but here I will document the issues I've been seeing.

The first issue originates here:

The error here is that one filter in the array may have a context array matching one container, while another filter later in the array has a context array matching a different container. However, the algorithm shortcuts out once it hits ANY filter which matches ANY container.

Secondly, there is a flaw here:

for _, ctx := range filter.Context {

The code here is basically saying "for each context, add to the applied list any containers which match; add to the not-applied list any containers which don't match". For certain filter files, you end up adding containers to both the applied and not-applied lists. The correct logic should be "for each container, if it matches any context, add it to the applied list; if it matches no context, add it to the not-applied list".

I wrote test cases to illustrate these issues here: samcornwell@d13d251. The current release of the vulnerability-operator fails these tests.

Here they are, reformatted to be a little easier to read:

{
    config: FilterConfig{
        Audit: []VulnerabilityFilter{
            {
                Vulnerability: "CVE-2023-0215",
                Context: []FilterContext{
                    {
                        Namespace: "monitoring",
                        Kind:      "Deployment",
                        Name:      "grafana",
                    },
                },
            },
            {
                Vulnerability: "CVE-2023-0215",
                Context: []FilterContext{
                    {
                        Namespace: "monitoring",
                        Kind:      "Deployment",
                        Name:      "kube-state-metrics",
                    },
                },
            },
        },
    },
    vulnList: []vuln.Vulnerability{
        libcryptoVulnMultiDeployment,
    },
    audited: []vuln.Vulnerability{
        {
            ID:       "CVE-2023-0215",
            Package:  "libcrypto3",
            Severity: "High",
            Type:     "apk",
            ImageID:  "alpine:3.17",
            Containers: []kubernetes.ContainerInfo{
                {
                    Namespace: "monitoring",
                    OwnerName: "grafana",
                    OwnerKind: "Deployment",
                    PodName:   "grafana-6d54bf9447-t7dfc",
                },
                {
                    Namespace: "monitoring",
                    OwnerName: "kube-state-metrics",
                    OwnerKind: "Deployment",
                    PodName:   "kube-state-metrics-6d54bf9447-t7dfc",
                },
            },
        },
    },
    found: []vuln.Vulnerability{},
},
{
    config: FilterConfig{
        Audit: []VulnerabilityFilter{
            {
                Vulnerability: "CVE-2023-0215",
                Context: []FilterContext{
                    {
                        Namespace: "monitoring",
                        Kind:      "Deployment",
                        Name:      "grafana",
                    },
                    {
                        Namespace: "monitoring",
                        Kind:      "Deployment",
                        Name:      "kube-state-metrics",
                    },
                },
            },
        },
    },
    vulnList: []vuln.Vulnerability{
        libcryptoVulnMultiDeployment,
    },
    audited: []vuln.Vulnerability{
        {
            ID:       "CVE-2023-0215",
            Package:  "libcrypto3",
            Severity: "High",
            Type:     "apk",
            ImageID:  "alpine:3.17",
            Containers: []kubernetes.ContainerInfo{
                {
                    Namespace: "monitoring",
                    OwnerName: "grafana",
                    OwnerKind: "Deployment",
                    PodName:   "grafana-6d54bf9447-t7dfc",
                },
                {
                    Namespace: "monitoring",
                    OwnerName: "kube-state-metrics",
                    OwnerKind: "Deployment",
                    PodName:   "kube-state-metrics-6d54bf9447-t7dfc",
                },
            },
        },
    },
    found: []vuln.Vulnerability{},
},

Fails to create metrics when cluster uses dockershim

In a cluster (still) using dockershim, the image ID is prefixed with docker-pullable:// by the runtime. As a consequence, the comparison in kubernetes.go#L74 fails for all images.

As a quick workaround, I removed the prefix from c.ImageID:

--- a/internal/vuln/kubernetes/kubernetes.go
+++ b/internal/vuln/kubernetes/kubernetes.go
@@ -2,6 +2,7 @@ package kubernetes

 import (
        "context"
+       "strings"

        "github.com/sirupsen/logrus"
        corev1 "k8s.io/api/core/v1"
@@ -71,7 +72,8 @@ func (client *KubeClient) GetContainersWithImage(imageID string) ([]ContainerInf
                statuses = append(statuses, p.Status.EphemeralContainerStatuses...)

                for _, c := range statuses {
-                       if c.ImageID == imageID {
+                       fixedImageID := strings.ReplaceAll(c.ImageID, "docker-pullable://", "")
+                       if fixedImageID == imageID {
                                infos = append(infos, ContainerInfo{
                                        Namespace:     p.Namespace,
                                        PodName:       p.Name,

If something like this makes sense as a patch, I'd happily create a PR. But I don't know if k8s.io/client-go offers something more elegant.
