ckotzbauer / sbom-operator
Catalogue all images of a Kubernetes cluster to multiple targets with Syft
License: MIT License
Hi @ckotzbauer. Thank you very much for your work on supporting Azure DevOps repos.
Unfortunately, it doesn't seem to be working for me.
This is the log output from an sbom-operator pod deployed into Kubernetes:
time="2022-09-20T09:11:41Z" level=info msg="Commit: 0f4635d8a13131aa655e6096beb6f46a92199ce9"
time="2022-09-20T09:11:41Z" level=info msg="Built at: 2022-09-17T08:59:11Z"
time="2022-09-20T09:11:41Z" level=info msg="Built by: goreleaser"
time="2022-09-20T09:11:41Z" level=info msg="Go Version: go1.19"
time="2022-09-20T09:11:41Z" level=debug msg="Targets set to: [git]"
time="2022-09-20T09:11:41Z" level=info msg="Webserver is running at port 8080"
time="2022-09-20T09:11:41Z" level=info msg="Wait for cache to be synced"
time="2022-09-20T09:11:41Z" level=error msg="Open or clone failed" error="'git clone -b sbom-operator https://******@dev.azure.com/******/**********/_git/******************* /work/sbom' failed: Cloning into '/work/sbom'..."
time="2022-09-20T09:11:41Z" level=info msg="Start pod-informer"
time="2022-09-20T09:11:41Z" level=info msg="Processing image ghcr.io/ckotzbauer/sbom-operator@sha256:846b4d38b700d4c995404ff038f97b1748e3018e234b1255d7989a8ed7647a2e"
time="2022-09-20T09:11:41Z" level=info msg="Finished cache sync"
time="2022-09-20T09:11:41Z" level=error msg="An error occurred while processing /work/sbom/aks/sbom" error="lstat /work/sbom/aks/sbom: permission denied"
time="2022-09-20T09:11:41Z" level=debug msg="Pod sbom-operator/sbom-operator-788b8f6697-7k59x needs to be analyzed"
time="2022-09-20T09:11:41Z" level=debug msg="Skip image ghcr.io/ckotzbauer/sbom-operator@sha256:846b4d38b700d4c995404ff038f97b1748e3018e234b1255d7989a8ed7647a2e"
time="2022-09-20T09:11:41Z" level=error msg="An error occurred while processing /work/sbom/aks/sbom" error="lstat /work/sbom/aks/sbom: permission denied"
time="2022-09-20T09:11:44Z" level=error msg="Directory could not be created" error="mkdir /work/sbom/aks: permission denied"
time="2022-09-20T09:12:16Z" level=info msg="Processing image registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47"
time="2022-09-20T09:12:18Z" level=error msg="Directory could not be created" error="mkdir /work/sbom/aks: permission denied"
time="2022-09-20T09:12:18Z" level=debug msg="Image registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 marked for removal"
time="2022-09-20T09:12:18Z" level=debug msg="Start to remove old SBOMs"
time="2022-09-20T09:12:18Z" level=error msg="Open failed" error="stat /work/sbom/.git: permission denied"
time="2022-09-20T09:12:18Z" level=debug msg="Deleted old SBOM: /work/sbom/aks/sbom/registry.k8s.io/ingress-nginx/kube-webhook-certgen/sha256_549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47/sbom.json"
Could it be that the plain git clone output is interpreted as an error by the wrapper code? Just guessing, since the log says 'Cloning into ...' as if the clone worked.
Deleting a pod results in an error:
time="2022-09-28T14:21:56Z" level=error msg="Could not load project: The project could not be found. (status: 404)"
In DependencyTrackTarget.Remove(), the lookup fails because populating g.imageProjectMap fails: it remains empty after g.LoadImages(). As a result, the dtrack project UUID for the image cannot be found.
There is a bug in DependencyTrackTarget.LoadImages(): imageId is set to an empty string at the beginning of the project-tag for-loop. Therefore it will only be added to imageProjectMap if raw-image-id is the last tag in project.Tags. However, the sbom-operator tag usually follows raw-image-id in project.Tags and resets imageId.
To fix this particular issue, imageId = "" should be moved above the loop.
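A minimal sketch of the described fix (names are illustrative, not the operator's exact code): resetting imageId at the top of each iteration discards the value whenever another tag follows raw-image-id, while resetting it once before the loop preserves it:

```go
package main

import (
	"fmt"
	"strings"
)

// imageIDFromTags extracts the raw-image-id tag value from a project's tags.
// The reset happens once, before the loop; doing it inside each iteration
// would wipe the value whenever another tag follows raw-image-id.
func imageIDFromTags(tags []string) string {
	imageId := ""
	for _, tag := range tags {
		if strings.HasPrefix(tag, "raw-image-id=") {
			imageId = strings.TrimPrefix(tag, "raw-image-id=")
		}
	}
	return imageId
}

func main() {
	tags := []string{"raw-image-id=sha256:abc", "kubernetes-cluster=default"}
	fmt.Println(imageIDFromTags(tags)) // sha256:abc
}
```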
However, this is not enough to fix the actual problem that deleting a project from DependencyTrack fails.
It seems there are inconsistencies in imageProjectMap. Once sbom-operator is running, new pods successfully create new dtrack projects, but the imageId and corresponding UUID are not added to imageProjectMap. When the pod gets deleted, the project UUID resolved from imageProjectMap is 00000000-0000-0000-0000-000000000000 - the default for an empty uuid.UUID{}.String() - which cannot be found in dtrack.
A potential fix could be to update imageProjectMap at the end of ProcessSbom.
Hey,
I love this project. I already rewrote it for Nomad on my personal GitLab, so maybe I can help you if possible.
Is your feature request related to a problem? Please describe.
I would like this project to be able to be used on Docker swarm or Nomad to get SBOM from multiple providers.
Describe the solution you'd like
In the code, a lot of things are tied to Kubernetes (even the naming), so maybe create a providers folder, like the one for targets, and put all Kubernetes-related things in the right file. (In my rewrite I removed everything Kubernetes-related entirely, which is why I can't submit a PR.)
Additional context
To get images from multiple providers, you can read Diun's code, which does almost the same job: getting the image + version from providers.
Annotations are not available on Nomad yet, so I don't have exactly the same provider API as for Kubernetes.
Currently, the format is hardcoded to "json", but this should be configurable.
/kind feature
Hello,
It appears that even after the commit fdb76e5, the issue described in #290 is not completely resolved.
I am running the operator with the following configuration:
args:
  targets: dtrack
  dtrack-base-url: "{{ SBOM_DTRACK_URL }}"
  dtrack-api-key: "<path:{{ SBOM_DTRACK_KEY_VAULT_PATH }}>"
  kubernetes-cluster-id: "{{ DNS_ZONE }}"
  registry-proxy: "docker.io=docker-proxy.company.net"
  format: cyclonedx
  verbosity: {{ SBOM_LOG_LEVEL }}
  ignore-annotations: true
Initially, all images are correctly registered in dtrack, with all docker.io images being mapped to docker-proxy.company.net. However, they are soon marked for removal because the operator cannot find them on the cluster (the mapping is not applied during pod detection).
time="2024-05-30T09:27:50Z" level=info msg="Processing image docker.io/library/traefik@sha256:1957e3314f435c85b3a19f7babd53c630996aa1af65d1f479d75539251b1e112"
time="2024-05-30T09:28:11Z" level=info msg="Sending SBOM to Dependency Track (project=docker-proxy.company.net/traefik, version=v2.10.6)"
.....
time="2024-05-30T09:30:47Z" level=info msg="Removing kubernetes-cluster=bigeys-hp.company.net tag from project docker-proxy.company.net/traefik:v2.10.6"
time="2024-05-30T09:30:47Z" level=info msg="Image not running in any cluster - removing docker-proxy.company.net/traefik:v2.10.6"
Of course, the example traefik pod was running all the time.
Most likely the root cause of the problem is a lack of mapping in the informer function that loads pods from the cluster:
sbom-operator/internal/processor/processor.go
Line 365 in dc191d8
Proven to be working ok after a fix
Will open a PR
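The fix is, roughly, to apply the same registry-proxy rewrite when images are read from running pods as is applied when SBOMs are pushed. A hedged sketch of such a rewrite (illustrative only, not the operator's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// applyRegistryProxy rewrites an image reference according to a
// "source=proxy" mapping such as docker.io=docker-proxy.company.net,
// so both pod detection and SBOM upload see the same image names.
func applyRegistryProxy(image string, mappings map[string]string) string {
	for src, proxy := range mappings {
		if strings.HasPrefix(image, src+"/") {
			return proxy + strings.TrimPrefix(image, src)
		}
	}
	return image // no mapping applies; leave the reference unchanged
}

func main() {
	m := map[string]string{"docker.io": "docker-proxy.company.net"}
	fmt.Println(applyRegistryProxy("docker.io/library/traefik@sha256:1957e3", m))
	// docker-proxy.company.net/library/traefik@sha256:1957e3
}
```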
Hi,
First of all: thanks for this great work!
When running v0.9.0 with dependency-track as target, most publicly available images work fine, except for ECR-hosted ones:
sbom-operator-77fdbbfd87-dbznp sbom-operator time="2022-04-20T10:09:53Z" level=error msg="Image-Pull failed" error="GET https://602401143452.dkr.ecr.eu-west-1.amazonaws.com/v2/amazon-k8s-cni-init/manifests/sha256:6c70af7bf257712105a89a896b2afb86c86ace865d32eb73765bf29163a08c56: unexpected status code 401 Unauthorized: Not Authorized\n"
This ECR repo is provided by AWS and should be available for everyone. Other private ECRs give the same 401 error.
Some information about the environment:
Can someone point me in the right direction? I'll add it to the README if it's useful for others!
Are there any examples available of how to use the namespace-label-selector and pod-label-selector parameters? I've tried a few variations but get nothing but errors, so clearly I'm not understanding something here.
Config
Attempt 1
args:
  targets: dtrack
  format: cyclonedx
  dtrack-base-url: my-cool-url
  dtrack-api-key: notanapikey
  kubernetes-cluster-id: cluster-name
  namespace-label-selector: 'notin kube-system'
time="2023-06-06T00:30:00Z" level=info msg="Execute background-service"
time="2023-06-06T00:30:00Z" level=error msg="failed to list namespaces with selector: notin kube-system, abort background-service" error="failed to list namespaces: unable to parse requirement: found 'kube-system', expected: in, notin, =, ==, !=, gt, lt"
Attempt 2
args:
  targets: dtrack
  format: cyclonedx
  dtrack-base-url: my-cool-url
  dtrack-api-key: notanapikey
  kubernetes-cluster-id: cluster-name
  namespace-label-selector: 'notin (kube-system)'
time="2023-06-05T19:30:00Z" level=info msg="Execute background-service"
time="2023-06-05T19:30:00Z" level=error msg="failed to list namespaces with selector: notin (kube-system), abort background-service" error="failed to list namespaces: unable to parse requirement: found '(', expected: in, notin, =, ==, !=, gt, lt"
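For what it's worth, the parse errors above come from Kubernetes label-selector syntax: a set-based requirement needs a label key before the operator (`<key> notin (<values>)`), so a bare `notin kube-system` cannot be parsed. Assuming a cluster of v1.21 or newer, where every namespace automatically carries the `kubernetes.io/metadata.name` label, a selector excluding kube-system could look like:

```yaml
args:
  namespace-label-selector: 'kubernetes.io/metadata.name notin (kube-system)'
```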
Hi, I have been trying to scan a few clusters and publish the results to Dependency Track, but I am not sure how to identify images that exist in multiple clusters at the same time.
Could you tell me if this is possible, maybe by using a custom project name in dtrack? Something like Kubernetes cluster name + image name?
Hi, can you please provide information on the default interval at which the cron job runs, and whether the format is currently Quartz or Linux-standard cron?
As we already discussed this idea in Slack, dropping here so we do not forget!
Instead of running the operator as scheduled runs via a CronJob, we can use NewSharedIndexInformer to create an event loop that detects image changes instantly. For example:
// required imports (assumed):
import (
	appsV1 "k8s.io/api/apps/v1"
	metaV1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/tools/cache"
)

deploymentInformer := cache.NewSharedIndexInformer(
	&cache.ListWatch{
		ListFunc: func(options metaV1.ListOptions) (runtime.Object, error) {
			return clientSet.AppsV1().Deployments(metaV1.NamespaceAll).List(ctx, options)
		},
		WatchFunc: func(options metaV1.ListOptions) (watch.Interface, error) {
			return clientSet.AppsV1().Deployments(metaV1.NamespaceAll).Watch(ctx, options)
		},
	},
	&appsV1.Deployment{}, // object type to watch
	0,                    // resync period; 0 disables periodic resyncs
	cache.Indexers{},
)
Eventually we are able to get all add/update/remove events:
deploymentInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
	AddFunc: func(obj interface{}) {
		// handle newly created deployments
	},
	UpdateFunc: func(old, new interface{}) {
		// handle image changes
	},
	DeleteFunc: func(obj interface{}) {
		// handle removals
	},
})
P.S.: We also do not want to drop scheduled runs, but both features are mutually exclusive, so one run type must be chosen in the config.
I am trying to scan images that are located in a private Harbor registry but can't get authentication to work.
This is the error message I get:
time="2023-09-21T07:09:39Z" level=error msg="Source-Creation failed" error="failed to construct source from input registry:myregistry.com/myimage@sha256:0587e7c68dce109536c16a222f50e0459cb062cb08ff323fa467292368429af4: could not fetch image \"myregistry.com/myimage@sha256:0587e7c68dce109536c16a222f50e0459cb062cb08ff323fa467292368429af4\": scheme \"registry\" specified; image retrieval using scheme parsing (myregistry.com/myimage@sha256:0587e7c68dce109536c16a222f50e0459cb062cb08ff323fa467292368429af4) was unsuccessful: unable to use OciRegistry source: failed to get image descriptor from registry: GET https://myregistry.com/v2/myimage/manifests/sha256:0587e7c68dce109536c16a222f50e0459cb062cb08ff323fa467292368429af4: UNAUTHORIZED: unauthorized to access repository: myimage, action: pull: unauthorized to access repository: myimage, action: pull; image retrieval without scheme parsing (registry:myregistry.com/myimage@sha256:0587e7c68dce109536c16a222f50e0459cb062cb08ff323fa467292368429af4) was unsuccessful: unable to determine image source"
I have a docker secret on the pod that is scanned, as well as on the sbom-operator. The same secret is also configured as the "fallback-pull-secret".
The secrets look like that:
apiVersion: v1
data:
  .dockerconfigjson: xxxxx
kind: Secret
metadata:
  creationTimestamp: "2023-09-15T16:38:08Z"
  name: registry-credentials
  namespace: vulnerability-operator
type: kubernetes.io/dockerconfigjson
The secret content is the following:
{
  "auths": {
    "https://registry.hub.docker.com/": {
      "username": "myuser",
      "password": "mypassword",
      "auth": "myAuth"
    },
    "https://myregistry-otherlocation.com/": {
      "username": "myuser",
      "password": "mypassword",
      "auth": "myAuth"
    },
    "https://myregistry.com/": {
      "username": "myuser",
      "password": "mypassword",
      "auth": "myAuth"
    }
  }
}
Chart config:
image:
  pullSecrets:
    - name: registry-credentials
args:
  git-author-email: [email protected]
  git-author-name: sbom-operator
  git-branch: master
  git-repository: https://github.com/myCompany/sboms
  targets: git
  verbosity: info
  fallback-pull-secret: registry-credentials
envVars:
  - name: SBOM_GIT_ACCESS_TOKEN
    valueFrom:
      secretKeyRef:
        name: git-access-token-sbom
        key: token
resources:
  requests:
    cpu: 100m
    memory: 1Gi
  limits:
    cpu: 1500m
    memory: 3Gi
securityContext:
  runAsUser: 1000
  runAsNonRoot: true
  seccompProfile: null
Could the sbom-operator have issues if the secret contains multiple registry credentials? It is otherwise a valid docker config and works everywhere else in the cluster.
Currently, with git as the only target, it is possible to analyze only those images whose digests are not yet present in the git repository.
With additional targets (e.g. Dependency Track) this is no longer (easily) possible.
Suggestion:
annotations:
  ckotzbauer.sbom-operator.io/<containername>: <containerdigest>
A container image would be analyzed when the annotation for a particular container of the pod is missing or the digest differs from the current container digest. To force a single image, the annotation can be removed manually.
--ignore-annotations forces analysis of all images (in case a newly configured target has to be populated for the first time). After that, the flag has to be removed.
/kind feature
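The decision rule described above could be sketched as follows (a hypothetical helper, not existing operator code):

```go
package main

import "fmt"

// needsAnalysis implements the proposed rule: analyze a container image when
// the per-container annotation is missing or its digest differs from the
// digest currently running.
func needsAnalysis(annotations map[string]string, container, digest string) bool {
	v, ok := annotations["ckotzbauer.sbom-operator.io/"+container]
	return !ok || v != digest
}

func main() {
	ann := map[string]string{"ckotzbauer.sbom-operator.io/app": "sha256:abc"}
	fmt.Println(needsAnalysis(ann, "app", "sha256:abc"))     // false: digest unchanged
	fmt.Println(needsAnalysis(ann, "app", "sha256:def"))     // true: digest differs
	fmt.Println(needsAnalysis(ann, "sidecar", "sha256:abc")) // true: annotation missing
}
```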
A while ago I asked @wagoodman to describe how we can use syft as a Go module and, thanks to him, he created a gist for us. So we can use syft as a Go module instead of executing its binary.
https://gist.github.com/wagoodman/57ed59a6d57600c23913071b8470175b
/kind documentation
It's not possible to scope a PAT to a particular repository. The token you put into kubernetes cluster can be used to access all of your private repositories. I was wondering if it's possible to use Github App authentication which has been scoped to the Git source.
Hi,
We install sbom-operator with the Helm chart. We have noticed that sbom-operator fails to use configmap as a target because of a missing rule in the ClusterRole. It would be good to add a rule like the one below to the ClusterRole template of the Helm chart:
- apiGroups:
    - ""
  resources:
    - configmaps
  verbs:
    - get
    - create
    - list
    - delete
We (w/ @Dentrax) thought we could store SBOMs in ConfigMaps within the same cluster, creating one ConfigMap per container. One drawback of this approach is the size limit: Kubernetes only allows storing 1MB of data in a ConfigMap.
In addition to this idea: Falco announced its plugin system, which allows creating plugins that extend Falco's data sources, so we thought we could use SBOMs as input for Falco. We could develop a plugin that consumes SBOMs stored in ConfigMaps and applies rules on top of them.
I was trying to test the new podLabelTagMatcher feature (#172), but I could not manage to get it working.
Reproduce with:
Given I add a new pod to the cluster with the following manifest:
Given I add a new pod to the cluster with the following manifest
---
apiVersion: v1
kind: Pod
metadata:
  namespace: test
  name: 172-demo
  labels:
    foo: bar
spec:
  containers:
    - name: 172-demo
      image: alpine:3.13.0
      command:
        - sh
        - -c
      args:
        - while true; do sleep 10; done
When sbom-operator processes the pod for the dtrack target
Then I expect ctx.Pod.Labels to contain the label "foo" with value "bar"
But instead ctx.Pod.Labels is an empty map.
I debugged newInfo.Labels in processor.ListenForPods() to make sure the regex matcher itself is not the issue.
kubectl describe pod 172-demo shows the label foo=bar.
It would be great if we could add the namespace of the pod as a tag on the project created in dtrack.
Hello,
as promised, I've tried to use it with a proxy (#205):
here's my conf:
dtrack-base-url: http://dependency-track-backend.dependency-track.svc.cluster.local
fallback-pull-secret: gitlab-docker-auth
format: cyclonedx
ignore-annotations: true
registry-proxy: docker.io=docker.stable.lb.innovation.nif-cd.fr
targets: dtrack
The thing is, at the end of a run it deletes all docker.stable.lb.innovation.nif-cd.fr/* projects in dtrack, as they're not present on the cluster :(
time="2023-01-18T15:25:06Z" level=info msg="Removing kubernetes-cluster=default tag from project docker.stable.lb.innovation.nif-cd.fr/public-dockerhub/minio/operator:v4.5.8"
time="2023-01-18T15:25:06Z" level=info msg="Image not running in any cluster - removing docker.stable.lb.innovation.nif-cd.fr/public-dockerhub/minio/operator:v4.5.8"
but:
$ kubectl describe pod minio-operator-697dbbcc54-94hkh | grep Image:
Image: docker.io/istio/proxyv2:1.16.1
Image: docker.io/istio/proxyv2:1.16.1
Image: minio/operator:v4.5.8
I ran into an issue where I had sbom-operator running in my cluster; we then switched to a fresh repo, and the operator proceeded to skip every image and never populated the new repo. My original assumption was that sbom-operator would check the repo to see if the image exists, and skip it if it does.
Upon inspecting the source code, I found the HasAnnotation check, and then saw in the documentation that there is an --ignore-annotations flag. The only problem now is that it scans every image every time, which bogs down the operator.
So that leads to my question: is there a reason not to have the ability to skip an image if it's already found in the repository? Shouldn't the sha256 hash guarantee that the image in the cluster is an exact match of the one in the repository? If so, it seems the only way there would be a problem is if the sbom.json was manually manipulated and then committed, which seems like a case that shouldn't be considered. If I'm right about that, I was wondering if it makes sense to make this the default behavior, or perhaps add a flag to enable it?
Or I could be wrong about the hash thing, I don't know.
Thanks
edit: I don't know what the equivalent logic for the other targets would be; I'm just familiar with using the git target.
First off - thank you so much for this. I stumbled upon this repo while researching how to add syft into our CI pipeline for SBOM generation, and you might've just saved me a few weeks of banging my head against the keyboard.
It would be nice if, instead of using the entire registry + repository path as the project name, just the image name was used. When images are pulled from places like ECR/GAR/GHCR, the project names get pretty long, so it would be a nice-to-have if the project name were just the container name when creating the project in Dependency Track.
So my-cool-app vs https://{acct_id}.dkr.ecr.{region}.amazonaws.com/my-cool-app:{tag} as an ECR example.
Or maybe even some regex to pass in to override the default behavior so you don't have to maintain a list of regexes to parse image names.
There are multiple possible options I have in mind:
/kind feature
Hello,
first thanks for the work!
I'm looking at it and I've found something that may be a blocker for us: the use of mirrors.
We deploy our Kubernetes clusters in air-gapped environments and configure the mirrors in containerd.
That means that on the "Kubernetes side" you don't see the mirrors, but they are in use.
Would it be feasible to add a configuration option for mirror registries?
That would be awesome :)
With the current Dependency Track integration all scanned images get added as projects/versions to Dependency Track - but they never get removed.
We should somehow track the active projects/versions and delete the old ones. The delete logic should consider multiple clusters (or just multiple operators in a single cluster).
See discussion in the PR for initial Dependency Track integration: #25 (comment)
Secrets with type = "kubernetes.io/dockercfg", containing the pull secret in the field .dockercfg, do not work.
OpenShift uses this type of secret for the internal registry - so you cannot provide type = "kubernetes.io/dockerconfigjson" there.
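For reference, the two secret payloads differ in shape: the legacy `.dockercfg` field holds the registry map at the top level, while `.dockerconfigjson` wraps it in an `auths` object (credentials below are placeholders). The `kubernetes.io/dockercfg` payload:

```json
{
  "registry.example.com": { "username": "u", "password": "p", "auth": "dTpw" }
}
```

versus the `kubernetes.io/dockerconfigjson` payload:

```json
{
  "auths": {
    "registry.example.com": { "username": "u", "password": "p", "auth": "dTpw" }
  }
}
```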
Our Kubernetes cluster is behind an HTTP proxy. When sbom-operator (or syft?) tries to download an image, a timeout error is reported, as in the following log record. How can sbom-operator be configured to use an outbound HTTP proxy to get the image?
time="2023-11-19T21:24:43Z" level=error msg="Source-Creation failed" error="failed to construct source from input registry:k8s.gcr.io/metrics-server/metrics-server@sha256:5ddc6458eb95f5c70bd13fdab90cbd7d6ad1066e5b528ad1dcb28b76c5fb2f00: unable to load image: unable to use OciRegistry source: failed to get image descriptor from registry: Get \"https://k8s.gcr.io/v2/\": dial tcp 142.251.12.82:443: connect: connection timed out"
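One avenue worth trying (an assumption, not a documented operator feature): Go's default HTTP transport honors the standard `HTTPS_PROXY`/`NO_PROXY` environment variables via `http.ProxyFromEnvironment`, so setting them on the operator's container may route image pulls through the proxy, provided the registry client uses the default transport. Using the chart's `envVars` field shown elsewhere in this thread:

```yaml
envVars:
  - name: HTTPS_PROXY
    value: "http://proxy.company.net:3128"   # hypothetical proxy address
  - name: NO_PROXY
    value: ".svc,.cluster.local,10.0.0.0/8"  # keep in-cluster traffic direct
```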
Can we get a new patch release for #442?
This is currently blocking us from going to production with the sbom-operator.
Do not only remove the sbom.json; remove the folder with the digest instead. If there are any other files for an image (e.g. a README), they should be deleted too.
/kind bug
When I try to configure a private Azure DevOps git repo as a target cloning fails with the following message:
level=error msg="Open or clone failed" error="unexpected client error: unexpected requesting \"https://*******@dev.azure.com/****/*************/_git/**************/git-upload-pack\" status code: 400"
I suspect the go-git lib to be responsible here as other people report similar issues:
src-d/go-git#335
This seems to have been solved for other projects by falling back to git client:
argoproj/argo-cd#1244
/kind bug
Introduced in 0.3.0
Greetings!
we are happily using the sbom-operator with DependencyTrack in multiple Kubernetes (Kops & EKS) clusters. We want to scan as many workloads as possible, so we keep our label selectors for the sbom-operator quite loose.
Recently, we noticed that our DependencyTrack instance was running quite slowly and that the connected database had very high, constant CPU usage.
After investigating, we found that the number of tags in the corresponding TAGS table in DependencyTrack had grown to about 480,000 entries. As queries related to this table account for almost all our CPU usage, we analysed the content further and found that 2/3 of the stored tags are controller-uid=SOME_UUID and the rest are mostly job-name=SOME_KUBERNETES_JOB_NAME. In our opinion, those two labels are created by the several CronJobs running in our clusters: each time a CronJob is triggered, it creates a new Job (job-name) and a new pod, and the pod inherits both the job-name label and the controller-uid from the new Job.
Since we want to know which images and dependencies are used in our CronJobs, we want to keep scanning them if possible.
The sbom-operator currently appends all labels of a given pod to the tags of the project in DependencyTrack, as can be seen here: https://github.com/ckotzbauer/sbom-operator/blob/main/internal/target/dtrack/dtrack_target.go#L153
Our suggestion would be to add a new configuration option that allows filtering the labels that get transformed into tags:
label-filter (regular expression) - Allows filtering of labels that get converted to tags in the target system (e.g. DependencyTrack). This can be used to include only required labels or to exclude noisy labels like controller-uid on pods created by CronJobs. To exclude controller-uid as well as job-name, set the filter to: (?i:(controller-uid|job-name))
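A sketch of how such a filter could be applied before labels become tags (the function and its exclude semantics are the proposal above, not existing operator code):

```go
package main

import (
	"fmt"
	"regexp"
)

// filterLabels drops every label whose key matches the exclusion pattern,
// e.g. `(?i:(controller-uid|job-name))` for the CronJob noise described above.
func filterLabels(labels map[string]string, pattern string) map[string]string {
	re := regexp.MustCompile(pattern)
	out := map[string]string{}
	for k, v := range labels {
		if !re.MatchString(k) {
			out[k] = v // keep only labels that do not match the filter
		}
	}
	return out
}

func main() {
	labels := map[string]string{
		"app":            "billing",
		"controller-uid": "1b4e28ba-2fa1-11d2-883f-0016d3cca427",
		"job-name":       "billing-export-28473920",
	}
	fmt.Println(filterLabels(labels, `(?i:(controller-uid|job-name))`))
	// map[app:billing]
}
```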
If desired, we can create a Pull Request that implements this feature, otherwise we can also support in testing or reviewing it.
Thank you for your support and have a great day,
Florian.
Hello - thank you for starting this project - it has saved me from attempting to build the same thing!
Would you be open to a contribution to allow SBOM generation from AWS Lambda functions?
Broadly, something like: calling the GetFunction operation to obtain the Code.Location URL of the deployment package.
This would enable use of this tool in an environment in which there is a mix of Kubernetes workloads and serverless ones.
I wanted to gauge your interest in whether this aligns with your project goals before contributing a PR.
Image IDs are retrieved as follows:
sbom-operator/internal/kubernetes/kubernetes.go
Lines 106 to 122 in 151dde7
And they are parsed as follows:
sbom-operator/internal/syft/syft.go
Line 42 in 151dde7
I am seeing the following error logs:
time="2022-04-06T19:40:03Z" level=error msg="Could not parse imageID docker-pullable://<rest of image id>" error="Error parsing reference: \"docker-pullable://<rest of image id>\" is not a valid repository/tag"
I believe this is because the image ID is prefixed with docker-pullable://.
I wrote a small program to parse <rest of image id> using the same library used by sbom-operator (docker-parser), and it works fine.
I think the simplest fix is to either remove the docker-pullable:// prefix or use the img.Image field instead.
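Of the two suggested fixes, stripping the prefix is a one-liner (a sketch; the operator's real code may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeImageID removes the "docker-pullable://" prefix that the kubelet
// reports for image IDs, leaving a reference that docker-parser can handle.
func normalizeImageID(imageID string) string {
	return strings.TrimPrefix(imageID, "docker-pullable://")
}

func main() {
	fmt.Println(normalizeImageID("docker-pullable://nginx@sha256:abc"))
	// nginx@sha256:abc
}
```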
#18 introduced the tracking of progress via annotations - this needs "get" and "update" on pods. These two verbs are missing in the deploy/rbac.yaml file.
Your ClusterRoleBinding has an error - the namespace is not specified. This causes installation errors.
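For a ServiceAccount subject, the `namespace` field is mandatory in a ClusterRoleBinding. A corrected binding would look roughly like this (the resource names are assumptions based on the chart's defaults):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: sbom-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: sbom-operator
subjects:
  - kind: ServiceAccount
    name: sbom-operator
    namespace: sbom-operator   # required for ServiceAccount subjects
```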
I'm currently using the sbom-operator for real-time scanning as described in the README:
https://github.com/ckotzbauer/sbom-operator/blob/main/README.md#real-time
I noticed the problem that the operator fills up the /tmp/ directory without ever cleaning up after scanning. Currently I have ~400 stereoscope-<number> directories in /tmp in a relatively small cluster. The current workaround is to regularly delete the pod to clean these up.
It seems that the problem was also known upstream, but should be fixed by now(?), though I haven't really found why the cleanup doesn't take place with this operator.
Ideas, guidance and maybe a solution would be more than welcome.
Instead of storing the generated SBOMs to Git it would be good to support different targets, e.g. "Dependency Track".
/kind feature
/cc @stevespringett
hi all,
We use DTrack behind an nginx reverse proxy to be able to use SSL.
For reasons, our dtrack-base-url is 'https://dtrack.example.com/api', but unfortunately everything following the host seems to be truncated: no matter what I enter, the sbom-operator uses the URL "https://dtrack.example.com/api/v1/bom" instead of "https://dtrack.example.com/api/api/v1/bom", and the upload fails with a DTrack API 400 error.
Does anyone have a tip or hint for me?
regards
Sascha
As a System Operator,
I would like to add pod labels as project tags in DependencyTrack
So that grouping/filtering by label in dtrack is possible
Background: We use k8s pod labels to determine and group things like application, stage, department, ...
Given the following deployment was applied to k8s cluster:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dependencytrack-frontend
  namespace: dependencytrack-sales-live
spec:
  template:
    metadata:
      labels:
        app: dependencytrack
        stage: live
        department: sales
        service: inventory
    spec:
      containers:
        - name: dependencytrack-frontend
          image: dependencytrack-frontend:4.5.1
---
When the sbom-operator scans the pod and adds the project to dtrack
Then the following tags should be added to dtrack:
[namespace=dependencytrack-sales-live, app=dependencytrack, stage=live, department=sales, ...]
But currently only the following tags are added:
[namespace=dependencytrack-sales-live, ...]
What do you think about the idea of adding Labels map[string]string to the struct libk8s.PodInfo and allowing a custom mapping of labels to dtrack project tags?
What would be an appropriate way to configure the custom mapping?
sbom-operator is awesome. Thank you!
Hello @ckotzbauer, cosign has support for attaching SBOM files to an OCI registry along with an image. So maybe we can support that too, as an alternative way of storing SBOM files instead of just storing them in git. WDYT?